Rationality, Unpacked

The word “rationality” carries a lot of historical baggage and cultural misconceptions, enough so that I have considered not using it at all. Yet a substantial portion of my social circle has decided to adopt this label (spoiler alert!), and for better or worse, it is the label that I use in my own mind. First I am going to address what rationality is not, before offering my own definition of rationality and explaining why we should care about it.

Cartesian Rationality and Axiomatic Systems

The first widespread use of the term came from the rationalism espoused by Descartes back in the 17th century. In this sense, the opposite of rationalism was empiricism. Rationalism as a philosophy, in its most extreme form, holds that the only source of knowledge or justification is our own reason. Descartes himself tried to derive all of the “eternal truths” of mathematics, epistemology, and metaphysics from the single starting assumption of cogito ergo sum – I think, therefore I am.

While not every rationalist thinker believes that reason is the only source of knowledge, the term carries connotations of conscious deliberation being the primary source of knowledge, morality, or action. Even a rudimentary reading of cognitive science clearly shows that our brain is a massively parallel and mostly unconscious processing machine, with a very small deliberation module attached on top (one particularly connected with verbal processing). Anyone hoping to utilize their reasoning needs to understand where it comes from and what purpose it serves, to avoid deluding themselves and going horribly wrong.

Another technical use of the word rational comes from the Von Neumann-Morgenstern axioms of utility theory. They proposed four axioms that seemed reasonable, and from those derived the existence of a cardinal utility function. Rationality, then, was equivalent to taking actions that maximize the expected value of this utility function. (In practice humans do violate these axioms, and indeed, violating them can lead to cases where you reliably lose.) This model has become closely associated with economics, and particularly with the self-interested homo economicus model of human behavior. The term “rational self-interest” got picked up by the Objectivists, and subsequently the word “rational” has been associated with the philosophical ideas of Objectivism. This is not what I am talking about either.
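To make the utility-theory sense concrete, here is a minimal sketch (not from the post; the utility function and lottery payoffs are invented for illustration) of how a VNM-rational agent ranks risky options by expected utility rather than expected money:

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as [(outcome, probability), ...]."""
    return sum(probability * utility(outcome) for outcome, probability in lottery)

# A concave (risk-averse) utility function over dollars, chosen for illustration.
utility = lambda dollars: dollars ** 0.5

sure_thing = [(100, 1.0)]        # $100 for certain
gamble = [(0, 0.5), (250, 0.5)]  # coin flip: $0 or $250

# The gamble has higher expected *money* ($125 vs $100), but under the
# concave utility the sure thing wins: 10.0 vs roughly 7.9.
print(expected_utility(sure_thing, utility))
print(expected_utility(gamble, utility))
```

The point of the axioms is that any preferences satisfying them behave as if they maximize some such function; the particular square-root utility here is just one arbitrary choice.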

Rationalization Hamsters and Straw Vulcans

There are two more connotations of rationality that are more modern in origin and worth dispelling. One is the association with the term rationalization. On one hand, it can mean the process of turning implicit rules into explicit ones (whether and in what circumstances that is a good idea is the subject of another post entirely), which on its face doesn’t seem to have negative connotations. Of course, the more common parlance refers to the process of coming up with elaborate justifications for your actions and/or existing belief structure.

Perhaps this is even the most appropriate connotation for rationality, if reason did indeed evolve to win arguments and persuade others. After all, someone yelling and waving “reason” around in your face likely doesn’t have your best interests at heart. However, that doesn’t mean we can’t hijack this evolutionary argument module into doing real work for us. The trick is in noticing these kinds of behaviors in yourself, and redirecting your faculty of reason accordingly. Once we develop the capability of self-reflection, we can begin to influence our behavior towards the things we care about.

The other modern misconception of rationality was affectionately dubbed by TV Tropes the “Straw Vulcan”. Named after the straw man argument, where you create a ridiculous and obviously incorrect version of the other person’s views to easily knock down, the Straw Vulcan is a stereotype of rational thinking that has been repeatedly put forward in fiction. Julia Galef gave an excellent presentation on this, with the following summary:

  • Being rational means expecting everyone else to be rational too.
  • Being rational means you should never make a decision until you have all the information.
  • Being rational means never relying on intuition.
  • Being rational means eschewing emotion.
  • Being rational means valuing only quantifiable things — like money, productivity, or efficiency.

Wrong, wrong, and five times wrong! A perfect rationalist would account for everyone else’s biases, make snap decisions on incomplete information when needed, use intuitions and emotions as valuable sources of computation and information, and maximize an actual human’s utility function.

Two Meanings of Rationality (And Why We Care)

Rationality is commonly divided into two domains: epistemic and instrumental.

Instrumental rationality is about accomplishing your goals, for any arbitrary set of goals you may hold. This is the form of rationality that most people find intuitive and uncontroversial. Indeed, this seems to be what everyone is trying to do on some level, even if you disagree with another person’s goals or how they go about achieving them. So in that sense, why do we even single out this concept at all?

I have gotten enormous mileage out of explicitly acknowledging what my goals are, and then thinking about how to best achieve them. The default path is to be a local hill climbing algorithm, making incremental steps towards vague and distant goals that are mediated by momentary emotional responses. I didn’t figure this out until the end of my college career, but since then my life has been drastically different. When I identify a goal, I figure out the intermediate steps via backward chaining, determine what resources and skills I need to accomplish it, and then come up with immediate steps I can take right now to acquire them. Over long time horizons, this even begins to look like having agency.
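The backward-chaining process described above can be sketched in code. This is a hypothetical illustration (the goal graph below is invented, not from the post): start from the end goal, recurse through its prerequisites, and emit steps in the order they must actually be done, prerequisites first:

```python
# Invented example goal graph: each goal maps to its prerequisites.
prerequisites = {
    "run a marathon": ["run a half marathon", "injury-free training plan"],
    "run a half marathon": ["run 10k comfortably"],
    "run 10k comfortably": [],
    "injury-free training plan": [],
}

def backward_chain(goal, graph):
    """Return goals in the order they must be achieved (prerequisites first)."""
    order, seen = [], set()
    def visit(g):
        if g in seen:
            return
        seen.add(g)
        for prereq in graph.get(g, []):
            visit(prereq)
        order.append(g)  # a goal is appended only after its prerequisites
    visit(goal)
    return order

print(backward_chain("run a marathon", prerequisites))
```

The first item in the returned list is an immediate step you can take right now; the distant goal comes out last. This is just a topological ordering of the goal graph, which is the formal shape of the planning habit the paragraph describes.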

Epistemic rationality is about having true beliefs about the world. True in this sense means a correspondence between physical objective reality and your mental model of that reality (if you’re not on board with these basic concepts, please press CTRL-W now) – a helpful metaphor for this is the map versus the territory. The aspiring epistemic rationalist is always trying to gather data about the world, check whether it matches his internal model, and update accordingly.
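“Updating” here has a precise Bayesian reading: revise your confidence in a belief in proportion to how much more likely the evidence is if the belief is true. A minimal sketch, with hypothetical numbers chosen purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Start 20% confident in a hypothesis, then observe evidence that is
# three times as likely if the hypothesis is true (0.9 vs 0.3).
posterior = bayes_update(0.2, 0.9, 0.3)
print(round(posterior, 3))  # 0.429
```

The map moves toward the territory by exactly as much as the evidence warrants: here from 20% to about 43%, not all the way to certainty.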

Ask yourself now whether you value having true beliefs. Just about everyone I ask will at least claim to hold having true beliefs as a value. At the same time, their revealed preference is to maintain their current belief structure. Given that we are unlikely to be correct about everything, an unwillingness to update runs counter to this value. In practice, humans have many different values, and we have to trade them off against each other either implicitly or explicitly. The values of social conformity and coordination through holding a predictable identity tend to win out. I do believe that in many contexts this is in fact the right instrumental choice.

So why then would anyone become a rationalist in the first place? I do think there is an instrumental case to be made for epistemic rationality: insofar as you want to accomplish goals in the real world, having a tighter correspondence means you will make fewer mistakes when interacting with reality. When your margin of error is especially small, you can’t afford to be wrong about reality – look at how we build bridges and skyscrapers, how we got humans to the moon and back alive, how we can launch a probe at a comet and hit it exactly when and where we anticipate…

But I don’t believe those concerns are what motivate most people who identify as rationalists to seek truth. When we find ourselves bumping up against the laws of physics we do what we have to do, but that doesn’t mean we then go about seeking any and every truth we can find. Instead we need to search for an emotional motivation. My best answer right now is that it boils down to having an aesthetic preference for truth. Untrue statements just feel more disgusting, contaminating, to some people. An important variable here is likely someone’s level of agreeableness: if we are strongly inclined to agree with other people, we are going to be strongly discouraged from independent exploration and validation. What is cause and what is effect, and what exactly we’re measuring here, is still an open question in my mind.

What’s Next

In this post I wanted to get on the same page with you about what I mean by rationality. This involves both negative and positive definitions. My next post is going to be about what rationalists are like in the wild, particularly what kind of challenges they face and what failure modes they are sometimes prone to. The following post will then present my alternative paradigm, what the model of an ideal rationalist looks like when implemented on the wetware we find ourselves inhabiting.

  • ‘An aesthetic preference for the truth’ -> these are the exact words I’ve used about myself since I realized that not everyone cares for the truth, that truth is elusive, and that often, not caring about the truth works just fine or even better.

    • Julian R.

      Indeed. Totally stealing that.