Wednesday, January 10, 2018

Playing along with Sam Harris Part 2

This is the second part of my analysis of the Sam Harris - Eric Weinstein - Ben Shapiro conversation.

Again, the intent isn't to deconstruct what is said, but to respond in real time to the questions Harris raises before the other participants do.

https://www.samharris.org/podcast/item/112-the-intellectual-dark-web

PART 2
48:33-52:30  Let me make an objective claim [based upon Eric’s separation of objective truth and objective truth that is “good enough”]. Example of wealth inequality as framed intersectionally. We are a confluence of lots of influences. Once you admit that you have either won the lottery or not, that conveys a sense of ethical responsibility.


I don’t think knowledge entails moral essentials. I think it is more likely that it bifurcates, or bimodalizes, the population, and even then we’re not quite sure how much overlap there is between the means. To answer the direct question, so that comment makes sense for those not on my wavelength: if I have perfect knowledge, or let’s just say lots of knowledge, that doesn’t mean I must act ethically. Not to get all Ayn Rand, but I can know that certain anti-social behaviours may be bad for other people and may harm me through environmental effects or blowback. That doesn’t mean self-interest doesn’t win out. In fact, I would suggest self-interest may be a rather utilitarian position. It may not be consequentialist (it doesn’t maximize things for everyone), but other than environmental effects and blowback, why should I be concerned about summing across all individuals?

You can go further down the rabbit hole and say that pro-sociality is a robust long-term good, but you haven’t made the case for any particular level of in-betweenness. In effect you’re taking a MacIntyrean approach to virtue: only perfect virtue avoids corruptive behaviour and environmental degradation. Again, in this sense I’d refer you to basic anti-utopian arguments. And I would note the utter religiousness of your position from this frame.

There is no necessary ethical responsibility that goes with knowledge. Knowledge may increase the odds, but self-interest, as the multi-level selectionist in me suggests, is always an option. If we can fool our genes or evolve them in directions we want, there are no guarantees that the solution we get is always going to be Darwinianly selected for pro-sociality. It may be likely, but it is not guaranteed. Because it can’t be guaranteed, you have a faith-based position, albeit one based upon better data than random spaghetti monsters, but it is nonetheless faith based. Thus I don’t see why worrying about the journey isn’t at least as important as worrying about the destination.

54:10  Eric alludes to the idea of a social Heisenberg uncertainty. You don’t say that a quark is being unethical right now. Morality has to do with some higher-up level, something that is not fundamental. Each level of observable has effective theories. Free will conversations get stuck here, mixing up effective theories. Who denies we have free will to get here? As if free will is good enough. Computing large-scale stuff just can’t be handled. It’s why self-reflection leads to madness.

Again, I think Eric Weinstein nails things here. I agree with the basic idea that free will emerges due to complicatedness, or chaos, in holding things in our minds for processing. And I think we very much need to respect the “good enough” principle. The assumption of ever-increasing levels of precision and sophistication is a fairly religious principle. There is nothing wrong with this. But over-extending it and treating it like a foundational meta-truth that all must accept is wrong. It is classic evangelizing. So I think we need to hedge our bets a bit and get out of this metaphysical hubris.

56:50 Eric says, “The fun part of these conversations comes from making these category errors. The unfun part comes from sorting it out.”

Again, I rather like the Weinstein way of framing things. It is nice to see academics who have obviously thought their ideas out but who lack the big pride thing some populists obviously have.

57:12 You’re not disputing that you can transition between layers?

I think micro-macro divides do introduce some very fundamental fuzziness. The propagation of this fuzziness can produce some rather chaotic effects, especially as you scale up through multiple levels. Nothing is guaranteed.
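
As a toy illustration of that worry (my own sketch, not anything from the podcast), here is how a microscopic bit of fuzziness compounds under even a simple nonlinear rule, the chaotic logistic map, in Python:

# Two trajectories of the chaotic logistic map (r = 4) that start a
# mere 1e-10 apart -- a stand-in for irreducible micro-level fuzziness.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x - y):.3e}")

# The trajectories agree to ten decimal places at the start, yet by
# roughly step 40 the divergence is of order one: micro fuzziness has
# propagated up and destroyed macro-level predictability.

Nothing about the rule is mysterious; the unpredictability comes purely from propagating imprecision across steps, which is all the micro-macro point needs.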

57:30 There is nothing about doing dishes that violates quantum field theory [your knowing that your wife will get mad about the dishes does not violate quantum field theory, and porting conclusions from that level upward in terms of the free will debate]

No. But I do worry about the conflation of forward prediction and backward rationalization. I think Stuart Kauffman did a very good job with these distinctions.

So I guess I’m not really sure what you are asking. As you say, “you can make a smooth transition between layers that doesn’t usurp your understanding of each layer”. If that is what you mean, then I would say you can appear to make a smooth transition between layers, but that probably involves some to a lot of self-deception and ex post facto rationalization. All the knowledge in the world does not necessarily let you escape these issues. I mean, will all the knowledge in the world get you around the Heisenberg uncertainty principle? If not, why are we assuming the same for social phenomena?

Maybe the idea is that in these compositions, or superpositions, of fuzzy influence and fuzzy knowledge there is no guarantee that the correct arrow of morality will emerge. We are placing bets that certain directions which we dogmatically, and perhaps utilitarianly, like are going to emerge as more probable, but I worry that we’re being naive about how much we’re really relying on our own selective history for pro-sociality. Once you throw out this frame, I’m not sure you’re guaranteed to be grounded the way you think you are.

That’s the classic problem with utopianism and transformation. It’s why Great Awakenings like the one we’re now in aren’t as predictable as everyone thinks. After all, there is no rational way Trump gets elected, or a democracy knowingly votes in a patently crazy Kim Jong Un. Right?

58:10 What I’m interested in is a first-principles formulation of moving forward into the unknown

I guess I really like Stuart Kauffman here. Let’s lose some of our own hubris about the rightness of our way, value the essential aspects of our human group nature, and appreciate the value and insights each perspective has.

But I know you want specifics. I think as specific as I can get is to say: don’t move too fast and don’t move too slow. Keep things in conversation and don’t try to push one-size-fits-all solutions.

Now I know your rejoinder is going to be, “Should we respect despotic Khmer Rouge dogmas?” Great question, but I think it misses the mark. There is a zone of likelihoods and efficiency. I think we all take that for granted. But that doesn’t imply that ever-increasing narrowness is inevitable or that a certain direction is deterministic. Complexity tends to rear its ugly head, and there may be a zone of probabilities that produces black-swan superpositions that utterly destroy your precision. In that regard I would say I’m rather Peircean. We can do better, but as Eric says, there may be a point where it is just good enough.

My issue with what I see as your approach is that it makes some foundational mistakes about precision and accuracy.

So what do we do? In the precise, long-horizon sense I don’t think it is wrong to say we don’t know. But as per David Snowden and his classic Cynefin framework for organizational answers to these very questions, I would say keep exploring and don’t get so caught up in our need for absolute precision. That need is itself a very big crutch I think we have to get over.
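
For the curious, here is a toy Python encoding of Snowden’s Cynefin domains and their canonical responses (my own illustrative mapping; the framework itself is qualitative, not code):

# The Cynefin domains and their canonical responses. Only the "clear"
# domain rewards precision; the complex and chaotic domains reward
# probing, acting, and exploring.
CYNEFIN = {
    "clear":       "sense -> categorize -> respond (apply best practice)",
    "complicated": "sense -> analyze -> respond (consult the experts)",
    "complex":     "probe -> sense -> respond (safe-to-fail experiments)",
    "chaotic":     "act -> sense -> respond (stabilize first)",
    "disorder":    "break the problem apart until a domain fits",
}

def recommend(domain: str) -> str:
    return CYNEFIN.get(domain.lower(), "unknown domain")

print(recommend("complex"))  # probe -> sense -> respond (safe-to-fail experiments)

Most of what this conversation wrestles with sits in the complex domain, where “keep exploring” is precisely the canonical advice.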

People need certainty in order to motivate and unify.  The problem is we tend to provide false utopian certainty almost no matter what and no matter our good intentions.  We get back to the issue that we need a little bit of uncertainty and quasi-propositionality.  And here I think you suffer the same flaws, or rather your approach is subject to the same flaws as anything else.

58:35  What I object to about religion is the idea that there was some prior century in which we were given the best ideas we could ever have. You can either locate yourself in a current modern conversation or anchor yourself in an ancient one.

I just think that is a non sequitur which assumes religion and religious interpretations don’t change and evolve. I think your direct reference to revelation brings in some connotations about this, at least from a Mormon perspective. Revelation opens the door to change and re-interpretation. Sometimes it is gradual, as per Catholic systematic theology. Sometimes it is dramatic, as per the Protestant Reformation or the emergence of any of the world religions.

The idea that only rationalism allows the updating of belief is just nonsensical.

I think what we are really arguing about is relative rates of change. You seem to be saying more flexibility is ideal: canonical religions, or perhaps even religions of any type, change too slowly and introduce unneeded supernaturalism and too much quasi-propositional and false belief.

On that I think we just have to disagree. I think the rate of change should be informed by genetic tendencies and by how easy it is for the population as a whole to change them. Push too fast because a particular sub-group can handle, and even excel with, loose mores, and you also have to watch out for the unintended effects this sub-group and its systems may have on the larger group as a whole. I guess I would sum that up as: respect the whole, but don’t be afraid to be different or to advocate for your differentness; just make sure you lose your hubris and learn the skeletons that come with systemic change and moral unfreezing. Naivete here is just as bad as blind faith.

59:40 We need to get to a common humanity that removes our religious provincialism

Well, I think you just walked into your utopian trap here.  If only everyone would just follow religion X.

That is an easy polemical attack, and I don’t think it is fair to your position. But I do think it points at the fundamental flaw in your position, so let me address it directly rather than in terms of its foundational assumptions.

I’d frame what you are saying as a need for us to move to a higher group level - cosmopolitanism, if you will. By moving to a higher group level we should, as per Pinker’s Better Angels of Our Nature, get quite a bit of conflict reduction. And yes, I know that Pinker would never invoke group selection.

That certainly might be true. But just because you have a larger group does not mean that you don’t have sub-group competition. In fact, I suspect what you find, as per Richerson, Cordes, and Boyd, is that sustaining a larger group level usually entails a loosening of moral norms and the toleration of higher levels of freeloading, offset by the selective advantages of group size, economies of scale, and group-directed altruistic benefits from core groups of altruists. In other words, your idea of a one-size-fits-all hyper-rational moral theology may be a bit naive. It does not necessarily entail the rational society and dynamics you envision. You may jettison supernaturalism, something I have absolutely no problem with and suspect is inevitable to one degree or another. But I don’t think rationalism guarantees the type of behaviour and dynamics you envision. I think such reasoning is a fallacy.


But that aside, I think what you need is time for gene-culture elements to evolve such that a transition to a new evolutionary level is stable. That involves extreme dependency, conflict-minimization tools, and coordination. I don’t think the language problem you allude to is significant enough to make many gains on the coordination level. I don’t think it induces any extreme dependencies, except ancillarily, like how the EU is now stuck with somewhat of a common fate. And, most importantly, I don’t think it causes the development of new conflict-minimization tools any more than any other utopian solution proposes to do. I will admit that it does make some progress. But I don’t think the solution is structurally significant enough to do what I think you need it to do.
