In my previous post I laid out what I did and did not mean by the term “rationality”. While I addressed what I consider to be misconceptions about the word and about how self-described rationalists behave, there are still some common problems that real-life rationalists run into in practice. In this post I want to discuss what these failure modes are and what generates them, in the hope of helping others recognize and avoid them.
The Crusaders
“That which can be destroyed by truth should be.” – P. C. Hodgell
This quote is greatly admired in our rationalist community, as you might expect. Given our aesthetic preference for truth, we want the divine light of evidence to burn away all of the unclean falsehoods lurking in the unexamined parts of our minds… For those who value truth above all else, this may in fact be the best policy to apply to their own minds. (The belief structures formed by this procedure also have an attractive property: they are robust to reality – revealing true information cannot damage them, unlike many of the social constructs we pretend exist.)
Our friend Michael Vassar has a great response to this quote: “That’s like saying anything that can be destroyed by lions should be.”
While this is a great idea for people who value truth and wish to hold true beliefs themselves, such people sometimes decide that the divine scouring light should be shone upon everyone and everything (atheist movements, for example, tend to spend a lot of time arguing with and bashing religious people). Sometimes this can be downright destructive: imagine deconverting a young child who is surrounded by true believers and has no other support network, or deflating an overconfident entrepreneur’s ego so thoroughly that they give up on their big idea. Some may argue these would still be good outcomes, but that is not my view. In most cases, though, this habit is simply annoying to other people. Their cherished beliefs are not, in fact, up for investigation. Getting on a high horse and trying to show others the way is much more likely to result in social rejection than in any shift in beliefs. Humans are social animals, and not being able to cooperate with others means more friction in everything you try to accomplish.
Tearing Down and Building Up Beliefs
One failure mode that I find particularly galling is when rationality is used only to destroy false theories, and never to look at the evidence and pinpoint new, true beliefs. This is particularly common among folks labeled as “skeptics” – they learn all about cognitive biases and then deploy them as a fully general counterargument against any speaker they disagree with. They are eager to deflate anyone else’s bad ideas with their considerable arsenal of epistemology, while never taking a stand of their own. Don’t get me wrong, being able to spot the flaws in someone else’s reasoning process is critically important – the failure mode comes into play when we don’t evaluate the evidence ourselves, as best we can, and take a stand on what the truth really is.
You can think of these as the negative and positive sides of rationality. The negative side is about identifying systematic mistakes in the reasoning process, and knowing how to correct for them in yourself or others. The positive side is about learning how to evaluate evidence in an unbiased fashion – building a reasoning process that will systematically converge on the truth, asymptotically. This is admittedly very hard to get right. It takes a lot of social daring to take a stand, particularly when it goes against the established consensus. If you take a strong position and you’re wrong, you will have a lot of explaining to do. Using your techniques only to tear down the ideas of others is the safe thing to do, but in neglecting the other half of the art you will never reach your full potential.
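To make the “systematically converge on truth” claim a little more concrete, here is a minimal sketch of Bayesian updating on a biased coin. This is my own illustration, not something from the original post; the coin bias, the candidate hypotheses, and the number of flips are arbitrary assumptions. The point is simply that a process which weights hypotheses by how well they predict the evidence piles probability onto the one closest to reality, rather than just refuting the others.

```python
import random

# A minimal sketch (illustration only): Bayesian updating on a biased coin.
# The true bias, candidate hypotheses, and flip count are assumed for the example.

random.seed(0)

true_heads_prob = 0.7                      # the "truth" we hope to converge on
hypotheses = [0.3, 0.5, 0.7]               # candidate coin biases under consideration
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # start from a uniform prior

for flip in range(1, 501):
    heads = random.random() < true_heads_prob   # observe one flip of the real coin
    for h in hypotheses:
        # Bayes' rule: weight each hypothesis by how well it predicted the observation
        posterior[h] *= h if heads else (1.0 - h)
    total = sum(posterior.values())
    for h in hypotheses:
        posterior[h] /= total                   # renormalize so the probabilities sum to 1
    if flip % 100 == 0:
        print(flip, {h: round(p, 3) for h, p in posterior.items()})

# With enough evidence, nearly all of the probability mass ends up on 0.7:
# the process converges on the truth rather than merely refuting falsehoods.
```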
This works on a micro scale as well – the typical word for it is “nitpicking”. I can’t tell you how many times I’ve been in a conversation with a group of rationalists, made a statement that is true in 98% of cases, and had several people start piling on with examples of the other 2%. Believe it or not, I had already thought of those exceptions! It takes a lot more time to spell out the general case and every possible exception than to just state the principle, and when trying to make many sequential steps in an argument, this really adds up. Furthermore, it creates a social chilling effect, where people make much more conservative or long-winded statements in order not to be wrong in any detail. So don’t do that, okay?
The Baby in the Bathwater
One side effect of having our aesthetic preference for truth hooked up to our disgust reaction is that a single untrue belief can contaminate all of the beliefs in its vicinity. The common phrase that applies here is “throwing the baby out with the bathwater”. When we come across a justification that is patently untrue, our disgust response triggers an innate module in our brains that tracks contagion, and gets us to stay well away. This attitude makes a lot of sense from the perspective of building up a large and fully justified network of beliefs – something we can verify is true to the best of our understanding, and that we can extrapolate from while preserving that truth. Certainly we don’t want any false beliefs in that model, because every conclusion that flows from them really is contaminated by falsity.
Unfortunately, this process will also generate a lot of false positives. Just because someone gives a bad justification for a belief does not, in fact, make that belief false. Furthermore, there is usually a reason that someone has located that particular hypothesis, even if their conscious deliberative module is only running on rationalization hamster power. Beliefs that do not correspond to reality and also do not provide any sort of fitness to their host tend not to survive for very long. This suggests that there is some kind of instrumental truth underlying that belief, just not the one that the belief’s proponent claims. Remember, the first virtue of rationality is curiosity. As rationalists, we should be able to cut through the disgusting justifications and try to understand the phenomenon itself. If this belief is causing someone to win, we need to figure out how and why and incorporate the right and true version into our model as quickly as possible.
Phonological Loop Takeover
You have probably seen the kind of people I’m talking about – there’s something that’s just a little bit off about them, or maybe even quite a bit off. Their movements tend to be slow and jerky, instead of smooth and flowing. Their speech is like that too, for that matter. Their vocal inflection doesn’t seem to line up with the words they are saying, and they certainly don’t seem to exhibit volume control. Perhaps worst of all, they don’t seem to respond to social cues in the moment, which creates a great deal of friction in their interactions. What is going on here?
These are some of the cluster of symptoms that I affectionately refer to as phonological loop takeover, though I admit the term is a bit confusing. The deliberative reasoning part of our brain appears to be located in the prefrontal cortex, a region which is greatly enlarged in humans relative to other animals. It seems to control behavior largely through inhibitory processes, which is a crucial piece of the puzzle – have you noticed that people tend to talk about what they should do, but then actually do something else entirely? This region seems to have very close ties with our verbal behaviors (much closer than with our motor cortex), so close in fact that I sometimes equate the two, as I do in the name above. (For Overcoming Bias fans, this is Robin Hanson’s far mode, and the social advantages it confers probably drove the evolution of the human brain.)
People who become rationalists tend to identify themselves with the activity in their prefrontal cortex, with all of the knock-on effects that entails. These regions of the brain are relatively stronger in such people, including those crucial inhibitory processes. They’re not just inhibiting the behaviors they don’t want – there is more inhibitory activity overall, and this results in the jerky, second-guessing motor and speech patterns. Furthermore, as is usually the case with identity, the parts of themselves that they don’t identify with (i.e. the entire rest of the brain) tend to get pushed into the background and ignored. These people do experience emotions, do pick up on social cues, and do have finely-honed intuitions, but they can’t quite get the part of the brain that’s in control to acknowledge those signals and act on them.
As far as I am concerned, this is the most pervasive and difficult error to which rationalists tend to fall victim. I used to be much further down this spectrum than I am today, without a doubt. So much of my past few years of growth has involved reintegrating all of those other functions into my overall cognition. I know it is possible to be different, and to be so much more effective than we currently are – and that is one of the missions I have for this blog.
Round Three
For most of the people reading this blog, I probably haven’t made what you would consider a good case for rationality. First I separated rationality from its cruft of old meanings and misconceptions, and left the gem of a promising idea. Then I talked about the ways rationalists tend to go wrong in practice, what pitfalls and bad habits they get into as a side effect of being the kinds of people who find and adopt rationality on their own. In truth, I have not yet made my case for rationality. What I have been doing is setting the stage. I believe that rationality, when applied to human beings with an understanding of human beings, is one of the most powerful skills you can have. In my next post, I will propose my vision of the rational human being.