I’ve been on the EA periphery for a number of years but have been engaging with it more deeply for about 6 months. My half-in, half-out perspective, which might be the product of missing knowledge or missing arguments (all the usual caveats apply, but more strongly than usual):
Motivated reasoning feels like a huge concern for longtermism.
First, a story: I eagerly adopted consequentialism when I first encountered it for the usual reasons; it seemed, and seems, obviously correct. At some point, however, I began to see the ways I was using consequentialism to let myself off the hook, ethically. I started eating animal products more, and told myself it was the right decision because not doing so depleted my willpower and left me with less energy to do higher impact stuff. Instead, I decided, I’d offset through donations. Similar thing when I was asked, face to face, to donate to some non-EA cause: I wanted to save my money for more effective giving. I was shorter with people because I had important work I could be doing, etc., etc.
What I realized when I looked harder at my behavior was that I had never thought critically about most of these “trade-offs,” not even to check whether they were actually trade-offs! I was using consequentialism as a license to do whatever I wanted to do anyway, and it was easy to do that because it’s harder for everyday consequentialist decisions to be obviously incorrect, the way deontological ones can be. Hand-wavy, “directionally correct” answers were just fine. It just so happened that nearly all of my rough cost-benefit analyses turned up the answers I wanted to hear.
I see a similar issue taking root in the longtermist community: It’s so easy to collapse into the arms of “if there’s even a small chance X will make a very good future more likely …” As with consequentialism, I totally buy the logic of this! The issue is that it’s incredibly easy to hide motivated reasoning in this framework. Figuring out what’s best to do is really hard, and this line of thinking conveniently ends the inquiry (for people who want that). My perception is that “a small chance X helps” is being invoked not infrequently to justify doing whatever work the invoker wanted to do anyway, and to excuse them internally from trying to figure out impact relative to other available options.
Longtermism puts an arbitrarily heavy weight on one side of the scales, so the comparison comes out looking much the same no matter what you put on the other side. (Speaking loosely here: longtermism isn’t one thing, not all people are doing this, etc. etc.) Having the load-bearing component of a cost-benefit analysis be effectively impossible to calculate is a huge downside if you’re concerned about “motivational creep,” even if there isn’t a better way to do that kind of work.
I see this as an even bigger issue because, as I perceive it, the leading proponents of longtermism are also sort of the patron saints of EA generally: Will MacAskill, Toby Ord, etc. Again, the issue isn’t that those people are wrong about the merits of longtermism — I don’t think that — it’s that motivated reasoning is that much easier when your argument pattern-matches to one they’ve endorsed. I’m not sure if the model of EA as having a “culture of dissent” is accurate in the first place, but if so it seems to break down around certain people and certain fashionable arguments/topics.
It’s so easy to collapse into the arms of “if there’s even a small chance X will make a very good future more likely …” As with consequentialism, I totally buy the logic of this! The issue is that it’s incredibly easy to hide motivated reasoning in this framework. Figuring out what’s best to do is really hard, and this line of thinking conveniently ends the inquiry (for people who want that).
I have seen something like this happen, so I’m not claiming it doesn’t, but it feels pretty confusing to me. The logic pretty clearly doesn’t hold up. Even if you accept that “very good future” is all that matters, you still need to optimize for the action that most increases the probability of a very good future, and that’s still a hard question, and you can’t just end the inquiry with this line of thinking.
Yeah, I’m surprised by this as well. Both classical utilitarianism (in the extreme version, “everything that is not morally obligatory is forbidden”) and longtermism seem to have many fewer degrees of freedom than other commonly espoused ethical systems, so it would naively be surprising if these worldviews could justify a broader range of actions than close alternatives.