Developing Object Permanence Around Flinches

Many years ago, I did an exercise where I made a list of thoughts that I flinched away from. Then, I made spaced repetition cards with the thoughts.

The cards were statements like: “As of March 2009, I am currently uncomfortable with the idea that quitting my job might be the right move.” (Totally fake example to communicate the format.)
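If you want to try it, here is a minimal sketch of how you might turn a list of such thoughts into cards for bulk import. The statements and filename are invented for illustration, and this isn’t the exact process I used; a plain one-statement-per-line text file is just one of several formats Anki-style programs will accept.

    from datetime import date

    # Invented examples of thoughts being flinched away from; the real list is personal.
    flinches = [
        "quitting my job might be the right move",
        "my current approach to exercise isn't working",
    ]

    stamp = date.today().strftime("%B %Y")  # e.g. "March 2009"

    # One card per line, in the format described above. A plain text file with
    # one field per line can be imported into Anki or most other SRS programs.
    with open("flinch_cards.txt", "w", encoding="utf-8") as f:
        for thought in flinches:
            f.write(f"As of {stamp}, I am currently uncomfortable "
                    f"with the idea that {thought}.\n")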

I think it was a really useful exercise, and it’s pretty easy to implement, and I basically recommend it to people.

I don’t think the spaced repetition software specifically was all that important; the point was that I developed something like object permanence around these mental flinches of mine, and the cards were simply how I happened to accomplish that.

If you try this, I wouldn’t try to force yourself to consider the uncomfortable thought at the object level. I would try to internalize that you are in fact uncomfortable considering it at the object level, and maybe meditate on possible cognitive chilling effects of that situation.

Because, in my experience, human brains are pretty good at back-propagating these flinches, and that can cut off a lot of otherwise useful thought. (The linked article is very good, but includes a framing and approach that are, IMO, importantly different from what worked for me. YMMV.)

Instrumentally Caring Intrinsically

A way of thinking that I’ve been using for a few years now, but that I don’t think I’ve ever written up, is the idea of instrumentally caring about things intrinsically.

Caring about something intrinsically is often very useful for coordinating with others.

When you care about something for its own sake:

  • It’s easy to strongly and coherently signal that you care about it.
  • People (rightly) expect that your caring will be fairly stable.
  • Your intuitions, aesthetics, and gut feelings will be aligned in such a way that you can act on your caring in real time.

I remember being an overly analytical kid who wondered whether there was something fundamentally incoherent about caring about things other than my own sensory inputs. I’ve now come around to the opposite idea. Intrinsically caring about only my sensory inputs is incoherent: there’s a lot of utility that cashes out in the form of cool sensory inputs, and you can only unlock it by intrinsically caring about things other than sensory inputs.

I see a lot of conversations break down when people can’t, or won’t, justify why they care about something. And I think there is something that can be a little “off” about trying to come up with justifications for intrinsic values; in my experience, trying can actually mess up people’s epistemics. Then again, if the things you care about most become semantic stop signs, I believe you’re leaving a lot of value on the table.

Instead, when these types of conversational roadblocks come up, I recommend people shift to discussing what’s good about caring about something intrinsically. 

A while back, someone on my Facebook feed stirred the pot by doing a cost-benefit analysis (IMO reminiscent of David Friedman’s stuff) of whether to call the cops on a bike thief. He got some pretty strong pushback from people who implicitly rejected his frame and said stuff like “fuck bike thieves”.

According to me, the right way to continue the conversation at that point is to ask how the world looks when we do cost-benefit analyses of reporting bike thieves vs. how it looks when we respond to bike thieves with moral outrage. That way, neither party is required to directly call their sacred values, or the things those values protect, into question at the object level, and the two sides can actually exchange information about their worldviews and where they disagree.

The Heritability of Everything

The gold standard in heritability estimates is the twin study, which involves looking at identical and fraternal twins, raised together or apart. This allows the cleanest decomposition of the variance in observed traits into contributions from genetics, shared environment (factors equally affecting all children raised together), and non-shared environment (everything else, including random noise).
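The basic logic is worth spelling out. Identical (MZ) twins share essentially all of their genes while fraternal (DZ) twins share about half on average, so comparing their trait correlations lets you back out the variance components. The classical back-of-the-envelope version is Falconer’s formula (the studies themselves use more careful statistical models, but the idea is the same):

\[
r_{MZ} = a^2 + c^2, \qquad r_{DZ} = \tfrac{1}{2}a^2 + c^2
\]
\[
\Rightarrow \quad a^2 = 2\,(r_{MZ} - r_{DZ}), \qquad c^2 = 2\,r_{DZ} - r_{MZ}, \qquad e^2 = 1 - r_{MZ}
\]

where \(r_{MZ}\) and \(r_{DZ}\) are the identical- and fraternal-twin correlations, \(a^2\) is the heritability (additive genetic variance), \(c^2\) the shared environment, and \(e^2\) the non-shared environment plus measurement error. So if identical twins correlate at 0.8 on some trait and fraternal twins at 0.5, the quick estimate is 60% genetics, 20% shared environment, and 20% everything else.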

The effect of parenting is generally equated with the shared environment, though there is clear evidence that parenting can differ substantially between siblings of the same parents and thus account for a significant fraction of the non-shared environment, and the shared environment by definition also captures things like the neighborhood in which you grow up. There are many caveats to heritability estimates, particularly that they are only defined within a given population and may not apply as well in extreme cases, but they are nonetheless our best estimates of the effects of genetics, and the effect is undeniably large.

An extremely ambitious meta-analysis of all twin studies was published in May 2015, reporting heritability estimates from 2,748 studies featuring over 2 million twin pairs, encompassing virtually every published study to date. The researchers have made a data visualization tool available if you wish to dig down into various aspects of the study, though it’s fairly opaque if you’re not familiar with the field’s jargon.

Across very broad domains of health outcomes, almost everything falls within the 40-60% heritability range, with cancer as a representative example being 46% heritable. Similarly, neurological variables show about 50% heritability (with little shared environment involvement), while cognitive and psychiatric outcomes are similarly heritable, but also have a nearly 20% shared environment component. Social values appeared to be 31% heritable, with shared environment playing nearly as big a role, explaining 27% of the variance. Social interactions were 32% heritable, with a somewhat smaller shared environment component of 18%.

Drilling down into more specific categories of interest, intellectual functions broadly were highly heritable at 67%, while more specific executive function metrics were 51% heritable with a high 24% shared environment contribution. Mood disorders were highly variable, from bipolar being 68% heritable to depressive episodes being 34% heritable. Height and weight showed 63% heritability, with relatively large 30% and 20% shared environment contributions respectively. The more specific values and social variables were mostly in line with the overall findings. Tendency towards religion and spirituality was 31% heritable with an even larger 35% shared environment component. Basic interpersonal interactions were similar, with 30% heritability but 36% determined by shared environment.

To summarize, basic variables like intelligence, height, and weight are primarily determined by genetics. Most health and psychiatric outcomes fell somewhere in the middle, but still showed roughly half of the variance explained by genetics. Variables relating to fundamental values (e.g. religion, politics) and social interactions (e.g. emotional intelligence, relationships) were by far the most malleable traits, with roughly equal contributions from genetics and shared environment.

What Is Submission?

This weekend, I had a good conversation with a good friend of mine about the meaning of submission. I ended up clarifying my thoughts on the concept quite a bit, in a way that I expect to be useful, despite being pretty abstract and meta.

For as long as I can remember, I’ve had pretty negative affect around the concept of submission. In my mind, submission has traditionally been mixed up with fear and shame. I associate it with an authority figure trying to intimidate me and get me to behave a certain way even though it goes against my own intuition about what to do. To submit would be to decide that the other person is scary enough that it would be worth it on my part to lose some status and take action that may not be in my best interest.

I knew that when people would talk about submission or surrender in the context of (among other things) Buddhist philosophy, they were talking about something a little different, but I’d never sat down and unpacked what exactly I thought they meant.

My new framing of submission is simply that it happens when a process decides to turn a certain part of its job over to a different process. It’s not all or nothing, and it happens all the time. By this definition, I’m submitting to the timer on my phone when I decide to stop tracking the time myself after putting the scones in the oven and just wait to hear the beep.

Will and I have been talking about letting go of control. My new best understanding is that it’s nearly impossible to do voluntarily unless I truly believe that something else can handle the task to my satisfaction. Of course, we’re sometimes wrong about which process is most effective at handling a given job. A common failure mode is to assume that only the verbal loop can be trusted with some task or other. I’d say that I currently alieve that it isn’t safe to experience anger without my verbal loop holding the reins, so to speak. On the other hand, I accept that subconscious processes do fine at figuring out when to breathe.

Given that we’re often miscalibrated about what we need to “have control over” (which I think usually means inhibiting behavior until the verbal loop gives an explicit okay), we can sometimes gain wisdom by forcing the loss of control and seeing what happens. Useful, but that approach has its limits.

For this week, I will meditate on the areas where I feel I am leaving “control” in the hands of a too-narrow mental process and what it would take to trust whatever I think should actually be running the show. And what is actually running the show now. It’s very common for this tight control to be more of an illusion than a reality. Maybe the verbal loop is preventing me from talking much when I’m angry, but the more relevant factor is that my emotions are evident in my body language.

It helps for me to remember my Hayekian heuristics about the failure modes of central planning. The more I can keep things distributed and let whatever has the most information act, the better the system will work.

Feelings and Needs

I’m a huge fan of Nonviolent Communication. I think I forget how much studying it has changed my life, because I take a lot of its lessons mostly for granted these days. I’m pretty good at empathizing, both with myself and others. I’m much better than I used to be at expressing what’s going on with me without mixing in too much narrative (something I was pretty good at even before reading that book).

But I see the NVC basics as a core practice that it serves me well to return to from time to time. A few years back, I memorized literally hundreds of flashcards about NVC. I used lots of lists and sentences from the book, and I also memorized its huge lists of feelings (which I cobbled together into categories myself) and needs.

I remember thinking at the time that I had a very limited emotional vocabulary. I often thought of my emotional state as being either good or upset, though I knew intellectually that nuances existed. I usually had no idea when I was angry. So, I decided to memorize a bunch of words for how I might be feeling, so I could mentally consult an extensive list. And I think I was even less aware of the unmet needs that my feelings might be coming from.

It still kind of surprises me how dramatically I feel a release of tension once I can pinpoint what’s really been bothering me and why.

I think the memorizing worked. I can no longer recite all the feelings and needs in order, which I think I actually could do once upon a time, but it got internalized, at least a bunch of it did. The words stuck around in my brain and sank in until I found myself using them in my thoughts and conversations. I recommend trying it, while keeping in mind that actually being able to pull the lists up from memory isn’t quite the point.

I’ve updated my feelings and needs deck (all taken from here), and you can download the deck.

Play around with it. Try some fill-in-the-blanks where you say “I am feeling ____ because my need for ____ is not being met.” Or, “I am feeling ____ because my need for ____ is being met.”
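If it helps to be prompted, here’s a toy sketch that spits out random practice sentences in that format. The abbreviated word lists are stand-ins I made up, not the contents of the actual deck.

    import random

    # Abbreviated stand-ins for the much longer NVC feelings and needs lists.
    feelings_met = ["grateful", "relieved", "energized", "peaceful"]
    feelings_unmet = ["frustrated", "lonely", "anxious", "discouraged"]
    needs = ["rest", "connection", "autonomy", "clarity", "being heard"]

    def practice_prompt():
        """Return one random fill-in-the-blank sentence to sit with."""
        if random.random() < 0.5:
            return (f"I am feeling {random.choice(feelings_unmet)} "
                    f"because my need for {random.choice(needs)} is not being met.")
        return (f"I am feeling {random.choice(feelings_met)} "
                f"because my need for {random.choice(needs)} is being met.")

    for _ in range(3):
        print(practice_prompt())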

The topic of Anki and self-improvement has been on my mind lately, so expect more posts on the subject in the coming weeks.

Summary of A Guide to the Good Life

A Guide to the Good Life: The Ancient Art of Stoic Joy is a handbook of Stoic philosophy by professor William Irvine. He points out that Stoicism is very different from the stereotypes we have developed about unfeeling robots, and in fact it contains a lot of timeless advice for psychological well-being. This is not an academic work of philosophy; it is written as a popular self-improvement book. Though he does discuss a bit of the philosophy and history behind Stoicism, the bulk of the book consists of practical and actionable advice to improve your life. My summary reorganizes the book chapters, with a brief intro in the beginning, followed by all the actionable advice and the author’s personal suggestions, and concluding with a discussion of Stoicism in the modern context and some brief notes on the history of Stoic philosophy.

[Read more…]

Beyond Rationality

I called this post “Beyond Rationality” because I wanted to move past the unfortunate connotations and bad habits associated with the word “rationality” in our culture. With tongue firmly in cheek, Divia and I often refer to the cluster of ideas I am about to present as post-rationality, and you may well encounter us using that very term. But in truth, I don’t see this philosophy as being opposed to rationality in any way. In fact, quite the opposite – I see this as rationality being properly applied. At the end of my last post, I promised to present you with a model of a rationalist human being. Not an ideally rational agent as described by mathematical equations, but how those abstract representations manifest in a living, breathing person. This is my approach to rationality, my philosophy of life, and why I think that rationality is actually an incredibly powerful meme.

Supremacy of the Instrumental over the Epistemic

In the first post in the series I presented my theory that self-described rationalists most often come to these ideas because of an aesthetic preference for truth. They are drawn to epistemic rationality, and that subsequently defines their relationship to these ideas. I found myself in the exact same boat when I first started out: the notion of systematically homing in on true beliefs was the siren’s call that left me immediately hooked. I had to understand these methods and apply them to my own cognition… and this planted the seeds for the triumph of instrumental rationality. [Read more…]

The Promise and Perils of Rationality

In my previous post I laid out what I did and did not mean by the term “rationality”. While I addressed what I consider to be misconceptions around the word rationality and how self-described rationalists would behave, I do think that there are some common problems that real-life rationalists run into in practice. In this post I want to discuss some of what these failure modes are, and what generates them, in the hope of helping others to recognize and avoid them.

The Crusaders

“That which can be destroyed by truth should be.” – P. C. Hodgell

This quote is greatly admired by our rationalist community, as you might expect. Given our aesthetic preference for truth, we want the divine light of evidence to burn away all of the unclean falsehoods that lurk in the unexamined parts of our minds… For those who value truth above all else, this may in fact be the best course of action to apply to their own mind. (The resulting structures formed by this procedure also have an attractive property: that they are robust to reality – revealing known true information cannot damage them, unlike many of the social constructs we pretend exist.)

Our friend Michael Vassar has a great response to this quote: “That’s like saying anything that can be destroyed by lions should be.” [Read more…]

Rationality, Unpacked

The word “rationality” carries a lot of historical baggage and cultural misconceptions, enough so that I have considered not using it at all. Yet a substantial portion of my social circle has decided to adopt this label (spoiler alert!), and for better or worse, it is the label that I use in my own mind. First I am going to address what rationality is not, before talking about this definition of rationality and why we should care about it.

Cartesian Rationality and Axiomatic Systems

The first widespread use of the term “rationalism” referred to the philosophy espoused by Descartes back in the 17th century. In this sense, the opposite of rationalism was empiricism. Rationalism as a philosophy, in its most extreme form, holds that the only source of knowledge or justification is our own reason. Descartes himself tried to derive all of the “eternal truths” of mathematics, epistemology, and metaphysics from the single starting assumption of cogito ergo sum – I think, therefore I am.

While not every thinker believes that reason is the only source of knowledge, the word does carry the connotation that conscious deliberation is the primary source of knowledge, morality, or action. Even a rudimentary reading of cognitive science clearly shows that our brain is a massively parallel and mostly unconscious processing machine, with a very small deliberation module attached on top (and particularly connected with verbal processing). Anyone hoping to utilize their reasoning needs to understand where it comes from and what purpose it serves, to avoid deluding themselves and going horribly wrong. [Read more…]

Your Inner Virtue Ethicist Should Like Self-Compassion

If you haven’t read Virtue Ethics for Consequentialists, I highly recommend it. As I see it, consequentialism is obviously correct, and virtue ethics is how you implement it on human hardware.

I’m also a big fan of self-compassion. Today, I was working with someone, and while we did some good IFS work together, we didn’t manage to wrap things up in a nice little bow at the end of the session. That happens sometimes. So, I recommended an operant conditioning exercise to work on in the meantime.

Basically, she imagines the situation that was triggering her and practices feeling compassion. Do this enough times, and it gets easier to feel compassion in the real situation.

She said that her inner virtue ethicist objected, because it seemed like rewarding herself for undesirable behavior. That’s not how I see it at all, so it seemed worth it to me to write up my reasons for seeing it differently.

Compassion is a game-theoretic hack.

Or something. 

Many people end up using some sort of internal system where they aim to feel good when they do something aligned with their moral system, and bad when they do something that isn’t aligned with their moral system. This is a relatively intuitive way to set things up. And compassion is generally experienced by people as positive, so I could see why it might seem backwards to “reward” yourself with compassion for, say, feeling resentful.

But here’s how compassion is a hack. Yes, compassion feels good. But in order to feel self-compassion, I have to be updating. As humans, we tend to store data that tells us that the world is worse than we realize. We cordon it off and prevent ourselves from looking at it. If we were to take it out and look at it, we would get sad. And if you look at your sadness from the right angle, you get self-compassion.

Compassion lets you feel good while updating your model of the world to be accurate even when you’re getting bad news. But you’re not going to start doing the undesirable behavior in order to get more compassion, because feeling the self-compassion at all requires that you be in exactly the sort of observer state that means you’ll be updating as you feel it. If you take in the compassion, you’ll accept the world as it is,  feel better about yourself, and then not have the emotional impetus to do the problematic behavior anymore.

(I’m somewhat worried that this post isn’t particularly articulate, as this isn’t a concept I’ve tried to put into words very often, but it seemed worth trying. I may revisit this topic later.)