I think you make a bunch of interesting points. I continue to agree with the general thrust of what you propose, though I disagree on some parts.
I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than shorttermist areas in expected value terms. Greaves and MacAskill's The Case for Strong Longtermism basically argues this. Ord's The Precipice basically argues this too, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
I haven't read that key paper from Greaves & MacAskill. I probably should. But some complexities seem worth noting:
The longtermist interventions we think are best usually have a nontrivial chance of being net harmful
It seems plausible to me that, if "longtermism is correct", then longtermist interventions that are actually net harmful will tend to be more harmful than neartermist interventions that are actually net harmful
This is basically because the "backfires" would be more connected to key domains, orgs, decisions, etc.
Which has various consequences, such as the chance of creating confusion on key questions, causing longtermism-motivated people to move towards career paths that are less good than those they would've gone towards, burning bridges (e.g., with non-Western governments), or creating reputational risks (seeming naive, annoying, etc.)
See also Cotton-Barratt's statements about "Safeguarding against naive utilitarianism"
Even longtermist interventions that are typically very positive in expectation could be very negative in expectation if done in terrible ways by very ill-suited people
Neartermist interventions will also have some longtermist implications, and I'd guess that usually there's a nontrivial chance that they have extremely good longtermist implications
E.g., the interventions probably have a nontrivial chance of meaningfully increasing or decreasing economic growth, technological progress, or moral circle expansion, which in turn is plausibly very good or very bad from a longtermist perspective
Related to the above point: In some cases, people might actually do very similar tasks whether they prioritise one (specific) cause area or another
E.g., I think work towards moral circle expansion is plausibly a top priority from a longtermist perspective, and working on factory farming is plausibly a top priority way to advance that goal (though I think both claims are unlikely to be true). And I think that moral circle expansion and factory farming are also plausibly top priorities from a neartermist perspective.
Greaves, MacAskill, and Ord might be partly presenting a line of argument that they give substantial credence to, without constantly caveating it for epistemic humility in the way that they might if actually making a decision
See also the idea of sharing independent impressions
A question I think is useful is: "Let's say we have a random person of the kind who might be inclined towards EA. Let's say we could assign them to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a longtermist perspective, or to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a neartermist animal welfare perspective. We have no other info about this person, which intervention they'd work on, or how they'd approach it. On your all-things-considered moral and empirical views (not just your independent impressions), is it >1,000,000 times as good to assign this person to the randomly chosen longtermist intervention as to the randomly chosen neartermist animal welfare intervention?"
I'm 95+% confident they'd say "No" (at least if I made salient to them the above points and ensured they understood what I meant by the question).
(I think expected impact differing by factors of 10 or 100s is a lot more plausible. And I think larger differences in expected impact are more plausible once we fill in more details about a specific situation, like a person's personal fit and what specific intervention they're considering. But learning about IBCs doesn't tell us those details.)
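To make that kind of comparison concrete, here's a minimal expected-value sketch. All probabilities and values below are made-up illustrative numbers, not anyone's actual estimates; the point is only that once each option carries a nontrivial chance of backfiring or of surprisingly large upside, it's hard for the ratio of expected values to reach anything like 1,000,000x.

```python
# Toy sketch with made-up illustrative numbers (not actual estimates).

def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical longtermist intervention: large upside, but a nontrivial
# chance of serious harm (e.g. backfires in key domains, orgs, decisions).
longtermist = [(0.6, 1_000), (0.3, 0), (0.1, -2_000)]

# Hypothetical neartermist intervention: modest direct value, plus a small
# chance of very good long-run spillovers (growth, moral circle expansion).
neartermist = [(0.9, 10), (0.1, 500)]

ev_long = expected_value(longtermist)   # 0.6*1000 + 0.3*0 + 0.1*(-2000) = 400
ev_near = expected_value(neartermist)   # 0.9*10 + 0.1*500 = 59

print(f"EV(longtermist) = {ev_long}, EV(neartermist) = {ev_near}")
print(f"Ratio: about {ev_long / ev_near:.1f}x, nowhere near 1,000,000x")
```

Different made-up numbers would of course give different ratios; the sketch is just meant to show why nontrivial downside and upside risks on both sides tend to compress the gap.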
I'd be interested in your thoughts on this take (including whether you think I'm just sort-of talking past your point, or that I really really should just read Greaves and MacAskill's paper!).
Thanks for this.
Greaves and MacAskill don't cover concerns about potential downsides of longtermist interventions in their paper. I think they implicitly make a few assumptions, such as that someone pursuing the interventions they mention would actually do them thoughtfully and carefully. I do agree that one can probably go into, say, DeepMind without really knowing their stuff and end up doing astronomical harm.
Overall I think your general point is fair. When it comes to allocating a specific person to a cause area, the difference in expected value across cause areas probably isn't as large as I originally thought, for example due to considerations such as personal fit. Generally I think your comments have updated me away from my original claim that everyone should know all IBCs, but I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Slight aside about the Greaves and MacAskill paper: I personally found it a very useful paper that helped me understand the longtermism claim in a slightly more formal way than, say, an 80K blog post. It's quite an accessible paper. I also found the (somewhat limited) discussion about the potential robustness of longtermism to different views very interesting. I'm sure Greaves and MacAskill will be strengthening that argument in the future. So overall I would recommend giving it a read!
I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I'm quite excited about this possibility.
Yeah, I'm definitely on the same page on those points!
So overall I would recommend giving it a read!
Ok, this has made it more likely that I'll make time for reading the paper in the coming weeks. Thanks :)