Great article!
One thing I noticed yesterday is that EA discussions are often well suited for leaving oneself a line of retreat. If you know what the horse gait called the pace looks like, the idea is almost the same, only conceptual: all your left feet are your personal qualities and motivations, and all your right feet are your epistemic beliefs.
(When I use words like “admit,” I mean it from the perspective of the actor in the following examples. I don’t mean to imply that it’s right for them to update, just that it’s rational for them to update given the information they have at the time. See also this question and answer for the distinction.)
A rationalist who loves meat too much can either brutalize their worldview to make themselves believe that the probability that animals can suffer is negligible (a standstill) or admit that they are acting in a morally inconsistent way but that changing it would take them much more willpower than it would others (and maybe they can cut out chicken and eggs, and offset the rest). They’ve put their right feet forward. Now they are less afraid of talking with vegans about veganism, and so get introduced to some great fortified veggie meats, so that, a few months later, they can also put their left feet forward more easily.
Or an animal rights activist who is very invested in the movement learns about AI risks and runs out of arguments for why AR values spreading should be more important than friendly AI research. They can either ridicule AI researchers for their weirdness (the standstill) or admit that the cause area is the more cost-effective one but that they personally are so specifically skilled at and motivated for AR that their fit there is much greater, so that someone with a comparative advantage for AI research can take that position instead. They’ve put their right feet forward. Being less afraid of talking with AI researchers, they can now also personally warm up to the cause (putting their left feet forward) and influence the researchers to care more about nonhuman animals, thus increasing the chances that a future superintelligent AI will too.
I like the addition of the more complex way of thinking about the line of retreat. I didn’t go into this in the article, but indeed, leaving a line of retreat permits a series of iterated, collaborative truth-seeking conversations that update beliefs incrementally.