Michael Levin and John Vervaeke on Free Will and Character Formation
My buddy John Munger introduced me to both of these heavy hitters. First there was John Vervaeke with his Awakening From the Meaning Crisis YouTube series, which I’m still working through; it’s been excellent so far. Then there was Michael Levin with his Lex Fridman conversation on Biology, Life, Aliens, Evolution, Embryogenesis, and Xenobots. His experiments on planarian flatworms are absolutely fascinating.
Some people fear the YouTube algorithm, but I dig it, because it usually suggests things that I like. Which brings me to this conversation between Michael Levin and John Vervaeke, hosted by Karen Wong. Watch the full video here.
I’m not quite sure yet how I feel about their free will discussion. But I like the character discussion that it leads into. And I wholeheartedly agree with Aristotle’s ideas around this.
When I ponder character formation, I feel a mix of emotions. On one hand, I feel terror, because the bad habits I have will never go away unless I intentionally put effort into stopping or replacing them. On the other hand, I feel hope, because I know the good habits I have will stay with me, and there is always the possibility to establish more good habits.
If nothing else, this conversation really hammers home the importance of an intentional daily routine. As Annie Dillard once said, “How we spend our days is, of course, how we spend our lives.”
John: … I know, Michael, you’re well aware of the long-standing free will versus determinism debate. And it sounds to me like you might be actually separating the phenomenological sense from self-determination. Or maybe you aren’t. And that’s the question I’m asking you. Did that question make sense to you?
Michael: Yeah, it made complete sense. And I’m on board with your definition. Here’s how I visualize this. For example, in one of Daniel Dennett’s early books on free will he basically points out the following, which is a very simple logical distinction: when you zoom into any event that happens, there are only two possible things that we know about. One is determinism, meaning it was caused by some previous event. The other is true quantum randomness, meaning in a fundamental sense it is acausal, so a particular particle decays, or it doesn’t, and there is literally nothing prior that explains this. In the ensemble there’s statistics, but for each individual event, that’s it. We know of nothing else. And his argument is, well, neither of those things sounds like what we want from free will. You don’t want your life to be determined and you don’t want it to be random; that’s not free will. Therefore, our concept is sort of like Santa Claus, it’s not a coherent concept.
So, here’s my take on this. I think he’s right, of course, if you zoom in. If you zoom into physics, no big shocker, all you see is physics. However, what I think we really mean by free will (and free choice and free action and all that stuff) is something that only makes sense… well, this is still all under development so I’ll probably disagree with myself a month from now… but for now, what I think we actually mean by a free action is not anything that can be applied to a temporally narrow event. I think that free action is stretched over long periods of time. So, the way I imagine it is this: imagine a bunch of slices, like the way they slice time in special relativity, as if it were a loaf of bread. The deal is that in any one slice, whatever is going on is determined by things you have no control over. I mean, what is the next thought you’re going to have? You don’t control your next thought. Your next thought is whatever pops up. At the moment, that’s what pops up. However, what you do have control over, over long periods of time, is the statistical spectrum of the kinds of thoughts you’re likely to have.
So, your free will is exercised over years, not because, hey, I was able to choose my next thought, that’s not under your control. But what is under your control is whether you apply consistent effort, whether through some sort of contemplative practice, and many traditions are on board with this. What they’re saying is, the important thing is consistently showing up and not getting tied into any sort of win or loss over your wild mind at any one point. But it’s that you show up, do the thing every day, and eventually you will alter yourself, you will alter your cognition to be something that has more of the right kinds of thoughts. Maybe you’re studying and you’ll be smarter, maybe you’re in anger management and you’ll be less impulsive. Whatever. But it isn’t anything you can control now. It’s consistent effort. And the way I picture it is we’re a collection of these self-lets, right, we are not full selves at any slice of time, because mostly what’s in charge there is your experiences to date and your body chemistry and whatever you had for breakfast and the Twinkies you’ve been eating to screw up your head. So that’s you on a very short-term basis. But on the long-term basis you become a self, like they do in calculus: you integrate these infinitesimals, these infinitely tiny things, and when you integrate them, the curve will eventually end up as something that’s not zero. Something that’s got some depth to it.
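Levin’s calculus analogy can be sketched as a toy equation (my illustration, not from the conversation): each instant contributes only an infinitesimal amount, yet integrated over a long interval the accumulation is something substantial.

```latex
% A toy version of the "integrating self-lets" picture:
% each moment contributes an infinitesimal f(t)\,dt toward the self,
% and the self at time T is the accumulated integral
\[
  S(T) = \int_{0}^{T} f(t)\,\mathrm{d}t .
\]
% Even a tiny constant effort, f(t) = \epsilon > 0, accumulates to
% S(T) = \epsilon T: negligible in any one slice, but nonzero,
% "something that's got some depth to it," as T grows.
```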
And you can think about how that may play out in social notions of blame, right. So, someone might say, I wasn’t able to do anything about this. Yes, at the moment, that’s true, but over the last year, it was on you to do x, y, and z so that you could have done this, right. It’s sort of that kind of notion. At the moment, of course, having not prepped for this at all, you couldn’t have done anything else, but what you’re actually to blame for is failing to do certain things over time, or in the case of drug addiction, doing things over long periods of time that get you to a point where, yeah, of course, in that moment you couldn’t have done anything different. So maybe the value judgement isn’t about what you did at that moment… that’s the part everybody is freaked out about, that neuroscience is going to screw up this notion of blame because we’re going to get to the point where everybody will say, that’s what my neurotransmitters were doing, what do you think I could have done? At the moment, that’s a fair story, but the question is, how did we get here?
So that’s my current version of the notion of free will. It’s not something that applies when you zoom into space and time. It only applies to an extended being that has the opportunity every day to make tiny little choices that eventually will modify its structure, or not, in the right way.
John: I think that’s really good. It’s a 4E Cog Sci answer. You have the notion of extension, extendedness from different temporal slices of the same person. Shaun Gallagher makes exactly the same argument explicitly in Enactivist Interventions: Rethinking the Mind. That we’re looking at the wrong scale of free will. That we’re looking, like the way Sam Harris does in his book, at that sort of moment-to-moment thing. I mean, if you go moment to moment – David Hume – you can make causation go away. If you go moment to moment – Zeno – you can make time and motion go away. I mean, we know that this method has its flaws and can make things seem illusory that would be fundamentally detrimental to any kind of realistic understanding of the world. So I agree with that.
I think about hyperobjects on different time scales, like evolution. Like, moment to moment I can’t see it! I mean, that was one of the silly refutations that people initially made to Darwin’s theory. Well, science is based on observation, and I can’t actually observe evolution occurring. I can’t see it or touch it. So therefore, it’s not real. That’s ridiculous. That makes no sense. You’re not looking at it at the correct scale.
Now if we admit that, which I heard you saying, and you’re nodding so I take it that I’m reading you okay, then it sounds to me like you’re invoking, and this is my response to Gallagher I made when I was talking about this to Dan Chiappe, I mean you’re bringing back Aristotle’s notion of character. This was his notion, right. That it’s not the moment to moment, but our abilities at self-determination and self-organizing can modify the virtual engines that select and enable constraints at multiple levels that shift around the probability space, which is what I hear you saying, the dispositional space, so that we’re more and more likely to behave in a virtuous fashion. That’s character.
Then one of the interesting notions of rationality that we’ve largely lost, due to the dominance of the computational model of rationality, is that rational agents are agents that can be held responsible for their character. Which is what you’re invoking in the court of law. It’s like, no no no, I get the moment-to-moment thing, but the thing that’s actually relevant here, notice my word, is how have you been cultivating your character? And I’m allowed to hold you responsible for how you’ve been cultivating your character because to some degree you’re a self-determining, self-organizing thing. It doesn’t mean there’s a ghost in the machine or anything like that, but you do have character, and so I was wondering what you thought about that as an important dividing line. It seems to me that that’s a distinction we make when we bring in the notion of a person. Which is where the whole debate around free will usually lands. But the idea is a person is an entity which can be held responsible for its character because the most causally relevant explanation of a lot of its behavior is the degree to which it’s generated its own character.
Michael: Yeah, exactly right. Notice on a practical level there are two pieces to this. That view is exactly what you want in terms of practical judgements from the legal system, because however it is that you shaped your cognitive apparatus, that’s a pretty good guide to what you’re going to do in the future. The point isn’t that oh my God there was a meteor strike and now because of that this happened and that’s never going to happen again. No, if we can tell a story about why what you just did is a reasonable feature of your cognitive apparatus, that’s really what we want to know. How likely is it that this is going to happen again? Then based on that, we make some decisions on what we’re going to do.
The other aspect to this is that the extent to which any of this makes sense is strongly influenced by the extent to which your conspecifics, you know, like other humans, are paying attention to this, and that serves to modify their behavior in society. That’s what you kind of want. Because if you’re dealing with creatures that are not able to modify themselves based on experiences, there’s no point in rewarding or punishing anybody, it’s not going to serve any kind of function, so you can sort of wall off the ones who are behaving asocially. But there’s no notion of you’re doing this to improve things in the future if they can’t learn. So, to be a person, one of the things you have to be able to do is observe that and say, okay, I see how we’re all doing this and I’m going to take actions that are more pro-social.
So, the capacity for those things is very practical. If you’re dealing with a bunch of snails or something, that’s just never going to impact their future behavior. It’s just not going to happen.
John: Exactly. Now I want to put the two together. So, you can have increased sensitivity to causal relevance, and then you can have a character that determines your sensitivity and your responses. And part of what it is to be rational is to be responsible for your character. Couldn’t we think, therefore, that there’s a proper place for wisdom as the cultivation of character that enhances one’s capacities for zeroing in on what is most causally relevant, in an optimal and ongoing fashion?
Michael: Yeah, I think that’s right on. From an Eastern point of view, a few colleagues and I just published something on the expansion of this notion of this Buddhist vow that is basically the commitment to enlarge your cognitive light cone. Right, you’re going along, you know, during evolution, and your light cone is getting bigger and bigger, but at some point you get sophisticated enough that you can actually sort of circle back and realize that and say, I am now going to put effort, not towards specific goals, but towards the meta-goal of being able to have bigger goals. Like, right now I’m capable in the linear range of caring about the well-being of 10 people in my family and that’s about it, but I am now going to work on expanding my cognitive system so that I can have, in the linear range, compassion towards thousands, millions, you know, all of them.
So, they have a concept, right. These Eastern traditions have this concept of literally committing yourself to the goal of being able to have bigger goals. Specifically with respect to compassion, let’s say.
John: Yeah. Wow, that’s excellent. I’d like to read that. That sounds like a profound convergence with what I just proposed to you. It’s interesting how we stepped out of sort of a strict linear hierarchy, but we were nevertheless able to say, I think, together, good dia-logos and co-emergence by the way, we were able to say together important things about these different levels that I think are very relevant, well, for me, towards addressing the meaning crisis. Because the tightness and the continuity that we were uncovering there really help to re-home us back into the scientific worldview. I mean, we were able to talk about persons and wisdom and character in a way that didn’t sound like we were dusting off old medieval scholasticism, but something that weaves very tightly into the emerging scientific worldview …