I think you make a bunch of interesting points. I continue to agree with the general thrust of what you propose, though I disagree on parts.
I actually do think that many prominent EAs, say Toby Ord, Will MacAskill and Hilary Greaves, would argue that longtermist areas are astronomically better than neartermist areas in expected value terms. Greaves and MacAskill’s The Case for Strong Longtermism basically argues this. Ord’s The Precipice basically argues this, but specifically from an x-risk perspective. It might be that longtermism is the only case where prominent thinkers in the movement do think there is a clear argument to be made for astronomically different expected value.
I haven’t read that key paper from Greaves & MacAskill. I probably should. But some complexities that seem worth noting are that:
The longtermist interventions we think are best usually have a nontrivial chance of being net harmful
It seems plausible to me that, if “longtermism is correct”, then longtermist interventions that are actually net harmful will tend to be more harmful than neartermist interventions that are actually net harmful
This is basically because the “backfires” would be more connected to key domains, orgs, decisions, etc.
Which has various consequences, such as creating confusion on key questions, causing longtermism-motivated people to move towards career paths that are less good than the ones they would otherwise have pursued, burning bridges (e.g., with non-Western governments), or creating reputational risks (seeming naive, annoying, etc.). See also Cotton-Barratt’s statements about “Safeguarding against naive utilitarianism”.
Even longtermist interventions that are typically very positive in expectation could be very negative in expectation if done in terrible ways by very ill-suited people
Neartermist interventions will also have some longtermist implications, and I’d guess that usually there’s a nontrivial chance that they have extremely good longtermist implications
E.g., the interventions probably have a nontrivial chance of meaningfully increasing or decreasing economic growth, technological progress, or moral circle expansion, which in turn is plausibly very good or very bad from a longtermist perspective
Related to the above point: in some cases, people might actually do very similar tasks whether they prioritise one (specific) cause area or another
E.g., I think work towards moral circle expansion is plausibly a top priority from a longtermist perspective, and working on factory farming is plausibly a top priority way to advance that goal (though I think both claims are unlikely to be true). And I think that moral circle expansion and factory farming are also plausibly top priorities from a neartermist perspective.
Greaves, MacAskill, and Ord might be partly presenting a line of argument that they give substantial credence to, without constantly caveating it for epistemic humility in the way they might if actually making a decision. See also the idea of sharing independent impressions.
A question I think is useful is: “Let’s say we have a random person of the kind who might be inclined towards EA. Let’s say we could assign them to work on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a longtermist perspective, or on a randomly chosen intervention out of the set of interventions that such a person might think is a good idea from a neartermist animal welfare perspective. We have no other info about this person, which intervention they’d work on, or how they’d approach it. On your all-things-considered moral and empirical views, not just your independent impressions, is it >1,000,000 times as good to assign this person to the randomly chosen longtermist intervention as to the randomly chosen neartermist animal welfare intervention?”
I’m 95+% confident they’d say “No” (at least if I made salient to them the above points and ensured they understood what I meant by the question).
(I think expected impact differing by a factor of 10 or of hundreds is a lot more plausible. And I think larger differences in expected impact become more plausible once we fill in more details about a specific situation, like a person’s personal fit and the specific intervention they’re considering. But learning about IBCs doesn’t tell us about those details.)
I’d be interested in your thoughts on this take (including whether you think I’m just sort-of talking past your point, or that I really really should just read Greaves and MacAskill’s paper!).
Thanks for this. Greaves and MacAskill don’t cover concerns about potential downsides of longtermist interventions in their paper. I think they implicitly make a few assumptions, such as that someone pursuing the interventions they mention would actually do so thoughtfully and carefully. I do agree that one could probably go into, say, DeepMind without really knowing their stuff and end up doing astronomical harm.
Overall I think your general point is fair. When it comes to allocating a specific person to a cause area, the difference in expected value across cause areas probably isn’t as large as I originally thought, for example due to considerations such as personal fit. Generally, your comments have updated me away from my original claim that everyone should know all IBCs, but I do still feel fairly positive about more content being produced to improve understanding of some of these ideas, and I’m quite excited about this possibility.
Slight aside about the Greaves and MacAskill paper—I personally found it a very useful paper that helped me understand the longtermism claim in a slightly more formal way than, say, an 80K blog post. It’s quite an accessible paper. I also found the (somewhat limited) discussion of the potential robustness of longtermism to different views very interesting. I’m sure Greaves and MacAskill will be strengthening that argument in the future. So overall I would recommend giving it a read!
I do still feel fairly positive about more content being produced to improve understanding of some of these ideas and I’m quite excited about this possibility.
Yeah, I’m definitely on the same page on those points!
So overall I would recommend giving it a read!
Ok, this has made it more likely that I’ll make time for reading the paper in the coming weeks. Thanks :)