Me: “I don’t know what I should do next? Lots of things seem good, but I can’t think of particularly good reasons for them to cash out to what I care about.”
Hallucinated Frankenstein interlocutor: “People in your position tend to undervalue judgment and gut thinking relative to explicit reasoning; you should go with what your gut says is good.”
Me: “I live inside my head, and I can definitely say that much of the time my gut is saying what it thinks high-status people would want to hear, or is thinking about the last three examples and none before, or is taking into account at most two out of the whole space of considerations. Why am I trusting it again?”
HFI: “Sure, but if you just go with explicit reasoning, you’ll miss a bunch of unquantifiable intuitions that contain a lot of data, and those are valuable.”
Me: “Yes, but again, from inside my head, I need you to hear me when I say my gut is like the people who answer ‘should there be homosexuals in the military’ differently from ‘should there be gays or lesbians in the military’, and I don’t think I’m, like, unusual; this is like the whole point of the rationality project.”
This is a kind of conversation I have a lot (if this confuses you, see Appendix 1), which at times has left me feeling like maybe I’m the only person in EA/rationality who thinks they have a brain that makes mistakes by default, that can’t be trusted to make good decisions without actually thinking about them. This is definitely not the case!
There are a bunch of ways in which HFI and I above are not understanding each other (Appendix 2), but one is that, like half of a cow, there are two guts in play.
Sometimes people in the world are like “trust your gut instinct about people” and they mean “trust your split-second first impression.” And sometimes they’re like “in the end, you have to go with your gut” and they mean “after marinating in all the relevant ideas, thinking for a long time, sleeping on it, probably doing a lot of explicit reasoning, if you have a deep sense of what you should do, it’s worth trusting.”
And I often heard people as saying “trust your snap gut”, and I was like “my snap gut judgment is that that is insane. Do you know that I don’t know anything about this?” But at least in one case, and likely others, they meant “trust your reflective gut”, the gut that’s had time to sit with everything and digest it (the metaphor pays dividends).
[They might also mean that there’s a valuable exchange to be had between gut and explicit models, where your explicit reasoning informs your intuition and vice versa, and you can interrogate your sense of what’s reasonable or valuable, to mine it for interesting information or to find evidence that it’s based on what Melissa in third grade told you one time, and that maybe it’s time to let go of the notion that every fifth American highway mile is perfectly straight for planes to land in wartime.] (More in Appendix 3)
And sometimes I think they are saying “you’ve ingested more in this category than you give yourself credit for” and/or “you’ve had more time to digest this than you give yourself credit for”.
But at least this is something I can make sense of, because I know my snap judgment changes based on what tweet I read right before you asked me. I also know that good judgment is of deep, deep value, since we are making decisions all the time about what to do and how to act, and we don’t have time to do all of that explicitly (though see Appendix 4).
Appendix 1:
Sometimes people aren’t so much talking as trying to win the last argument they were in. If you’re like “what? EAs *love* explicit reasoning”, you’re right, and you’ll just have to take my word for it that there are subsections where indeed I’m under the impression (mistaken or not) that I need to fight on the margin for the glory of explicit reasoning. I find a lot of human behavior is more comprehensible if you adopt the frame in which people are reacting on the margin, perceiving themselves as a valiant minority surrounded on all sides by a stronger enemy on their pet issue. I’m here being like “explicit reasoning!” in a sea of “hone your judgment” in a larger continent of “explicit reasoning!” in a larger ocean of something else, so everyone gets to feel very brave.
Appendix 2:
I think HFI often doesn’t take my own sense of my weaknesses seriously, which is ironic, because it’s my carefully considered inside view based on being myself for a while.
It feels like surely it matters how good the judgment of the people involved is, and I don’t always feel like people are assessing me in particular to say “yes, Chana, you have good judgment”, so I’m in this awkward position where I feel like I have to say “just FYI, I think I’m worse than whatever the median person you’re talking to about this is, maybe?”
Some people’s judgment is bad, yo!
There are other good reasons to go with your judgment call, perhaps to test it and help it be better in the future (though I think sometimes you can get this benefit by not going with your judgment but noting what it said and checking back later)
I think HFI could help me out by noting specific examples in which they were happy they went with their gut judgment over explicit reasoning (though often the reason they were happy is not very comprehensible to me in concrete terms, precisely because of the nature of the topic here)
Appendix 3:
I think figuring out why you believe what you believe is a great exercise (for which Focusing might be helpful), where you try to access the actual reasons your brain came to output this thing; some of those reasons are going to be good and some useless, and then you have a better sense of what to hold on to. (Though sometimes it will also be very unclear!)
And when you look at an argument and it makes sense to you and you see the logic, you’ll sometimes feel your gut sense change, because you have internalized and taken seriously what it means and what it points to.
Appendix 4:
I feel like I repeated that paragraph a bit because it feels virtuous in the minds of people who think “judgment is good.” In actuality, I think you could reflect once a year on what your core goals are, check in every quarter on whether you’re aimed at them correctly, and check in every week on whether you’re moving towards them, in such a way that you’re mostly working off of very intense explicit reasoning you did at some point, even if in each moment you’re trusting your past self. It would be weird if judgment just didn’t matter here, though, and the whole point of this is that “deep thought” and “good judgment” aren’t in tension.
Appendix 5:
For what it’s worth, I want to stand up for not always having a judgment or a gut sense, or not knowing what it is, and for that not necessarily being some horrifying pathology where you’ve excised your humanity in service of the Numbers God, but instead just what it can feel like to be uncertain, or to have instincts that are pretty reliably warped by something or other (e.g., mental illness).
Well, there’s a difference between:
sorting: prioritizing two out of a larger number of considerations.
filtering: only considering two out of a larger number of considerations.
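If it helps to see the distinction concretely, here’s a toy sketch in Python; the considerations and their weights are invented for illustration. Filtering discards considerations before they can influence anything downstream, while sorting keeps them all available but attends to the top few first.

```python
# Toy illustration of the sorting/filtering distinction.
# The considerations and weights below are made up for the example.
considerations = {
    "salary": 8,
    "commute": 3,
    "growth": 7,
    "team": 6,
    "mission": 9,
}

# Filtering: only two considerations ever enter the decision;
# the rest are invisible to whatever reasoning follows.
filtered = {k: v for k, v in considerations.items() if k in {"salary", "commute"}}

# Sorting: every consideration stays available, but two get
# priority; the others can still break ties or raise flags.
ranked = sorted(considerations.items(), key=lambda kv: kv[1], reverse=True)
top_two, the_rest = ranked[:2], ranked[2:]

print("filtering sees only:", filtered)
print("sorting prioritizes:", top_two, "but keeps:", the_rest)
```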
Humans use cognitive aids (visuals, charts, summaries, recordings) to help them with stuff like:
memory: of lists/hierarchies
calculations: of numbers
representations: of data or information
That kind of information feeds into filtering and sorting processes and can alter their results.
Otherwise, your beliefs do the filtering and your priorities do the sorting.
Cognitive aids boost our decision-making, but only because otherwise we’d rely on:
faulty memory
incorrect or absent calculations
inadequate representations
The bottom line on explicit reasoning is that, even when done correctly, it can suffer from missing premises. Whatever the reason the premises are missing, the conclusion is distorted: information that would have been helpful input can’t affect a conclusion it never feeds into. Cognitive aids and a knowledge base can help with that problem.
I believe that in most circumstances internal feelings should roughly correspond to the information we receive about the state of the world and our place in it, but we are cognitively flexible and have fallible defaults.
What you see, hear, or feel internally might all reinforce a single conclusion, or conflict with it, or conflict with each other (for example, when what you tell yourself to do is not what you feel like doing and you don’t know what to conclude).
Marketers use mental associations and different sensory representations to convey or prioritize conflicting information. They take advantage of our fallible defaults.
You have probably noticed that in pharma drug commercials:
verbal information: a narrator (usually with cheery or flat intonation) comfortably rattles off how some lousy drug could kill you or make a limb fall off, in between a discussion of the drug’s benefits and a suggestion to ask your doctor about it
visual information: the visuals show a person suffering some ailment, then getting the drug, and suddenly their life changes to one of slow-motion smiling, socializing, dating, walking their dog, or playing with their grandkids
You don’t see any visuals of a person dying from using the drug or losing a limb from it. Supplying that would lead your feelings away from the commercial’s goals.
The choice of sensory channel changes how you perceive the information.
Most people claim that commercials don’t work on them. But they do.
You can be betrayed by (among other things):
mental associations
distracted attention
fallible default cognitive processing
Obviously, just trusting your gut might not always be the best thing, but sometimes doing so does just what you hope: it provides information that is not already available in your explicit reasoning.
I favor using cognitive aids to compensate for memory, calculation, or representation problems, but they don’t always help with internal conflicts or motivated reasoning[1]. An assistant can help lead you through a thought process that, when you are fatigued, becomes too difficult to do alone. That last idea is a gem, actually, but people don’t do it much.
I like Julia Galef’s work in that area; an entire book (The Scout Mindset) devoted to the problem of motivated reasoning shows a good awareness of a major problem.