Last week I posted this as my Facebook status: “Someone else’s emotional state is almost always a terrible optimization target.” I got a couple of requests to expand the thought to blog-post length, so here I go!
I’ve been thinking about it since I posted it, and I see two fairly separable reasons that the mental representation of another person in a particular emotional state isn’t a productive thing to focus my mind on. First, how other people feel isn’t something I can control. And second, it’s a metric that is easy, and harmful, to game.
Let me start with how I think intentionality works in the human mind. (These ideas are not original to me, but I also haven’t heard anyone articulate them in quite the way I do.) Intentions are represented as pictures of how we want the world to be, and I think they have more influence over our behavior when they’re represented vividly. Images are almost always involved; words may be involved; feelings in our body often are, and smells and tastes may be too. They have size, color, and position. They may be moving or static.
And ultimately, we try to make these pictures come true. Of course it’s somewhat more complicated than that, and we’re certainly imperfect at fulfilling our intentions, but that’s the basic idea. I won’t link to The Secret because that seems embarrassing and I haven’t actually read it, but I think that book says the same thing in more woo-woo language.
To break it down more: I think correspondence between our mental representations and our perceptions is reinforcing, and more so for the more vivid representations, the ones held in many modalities. So moving closer to an intention is rewarding, and that reward shapes our behavior.
And it’s worth saying that the intentions that have most powerfully shaped our behavior aren’t necessarily ones that are aligned with our explicit goals. As the saying goes, “we’d rather be right than happy.”
So, if I hold in my mind, say, a picture of Will being happy, the above process starts rolling and my behavior will change. I see it as somewhat problematic to set intentions about things that are outside my control. If you’ve heard of SMART goals, my objection should be familiar. If nothing else, it’s demoralizing to work that way. I think we learn by having a bunch of mental processes operantly condition a bunch of other mental processes. The best way to make progress is to reward yourself step by step as you act in ways that are closer and closer to how you want to be acting.
Here’s a link to support my point, noting that reinforcement works better when we can target behaviors the subject can control and offer voluntarily. “Will being happy” is neither.
But the more insidious and harmful issue is that these targets are gameable: “Will is smiling” is easier to game than “Will is momentarily happy,” which in turn is easier to game than “Will is, in a lasting way, actually more fulfilled in his life.”
I almost certainly care most about the last one, but I don’t have a tight feedback loop for it. The behaviors of mine most likely to affect it may be pretty far afield from picturing his smiling face. Psychology is tricky. Doing lots of little nice things for Will might make him happier, for example, but it’s less likely to work if he gets the sense that I’m desperate for it to work. If, instead, I do a bunch of nice things for Will because I’ve decided in some closer-to-deontological sense that it would be a good idea, and I remain unattached to the outcome, I’ll be vastly more open to feedback. It will work better.
There’s more detail I could go into, but I think I’ve hit the main points. I like trying to unpack wise statements that are obviously not totally true but that are usually excellent advice. Things people say about not being able to control others fall into that category, and detailed explanations of the mechanics behind them are rare and useful. I like to try to provide them.