I tentatively believe (ii), depending on some definitions. I’m somewhat surprised to see Ben and Darius implying it’s a really weird view, which makes me wonder what I’m missing.
I don’t want the EA community to stop working on all non-longtermist things. But that’s because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don’t mean indirect effects more broadly, in the sense of ‘better health in poor countries’ --> ‘more economic growth’ --> ‘more innovation’.)
For example, non-longtermist interventions are often a good way to demonstrate EA ideas and successes (e.g. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (e.g. [name removed], incoming at GPI, comes to mind as a great success story along those lines); and work on non-longtermist causes has better feedback loops, so it might improve the community’s skills (e.g. Charity Entrepreneurship incubatees are probably highly skilled 2-5 years after the program, though I’m not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons is that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. E.g. I think Charity Entrepreneurship is good because it’s creating a community and culture of founding impact-oriented nonprofits, not because [it’s better for shrimp / there’s less lead in paint / fewer children smoke tobacco products]. Basically, I think the only reason near-term interventions might be good is that they might make the long-term future go better.
I’m not sure what counts as ‘astronomically’ more cost-effective, but if it means ~1000x more important/cost-effective, I might agree with (ii). It’s hard to come up with a good thought experiment to test this intuition.
One hypothetical is ‘would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 million gets donated to GiveWell’s Maximum Impact Fund?’. This is confusing, though, because I’m not sure how important extra funding is in these areas. Another hypothetical is ‘would you rather 10 fairly smart people devote their careers to longtermist causes (e.g. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (e.g. following AAC advice)?’. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby longtermist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to a faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
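To make the arithmetic behind these hypotheticals explicit (using only the figures above): in both cases the neartermist option gets roughly 1000x the resources, so preferring the longtermist option amounts to judging longtermist work to be at least ~1000x as cost-effective per dollar or per career.

\[
\frac{\$10\text{ million}}{\$10{,}000} = 10^{3}, \qquad \frac{10{,}000\ \text{careers}}{10\ \text{careers}} = 10^{3}
\]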
I wonder if part of the reason people don’t hold the view I do is some combination of (1) ‘this feels weird so maybe it’s wrong’ and (2) ‘I don’t want to be unkind to people working on neartermist causes’.
I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I’m not sure how much longtermism actually falls into this category.
The idea is not that new, and there’s been quite a lot of energy devoted to criticising the ideas. I don’t know what others in this thread think, but I haven’t found much of this criticism very convincing.
Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede that strong longtermism is initially very counterintuitive, though.
Strong longtermism doesn’t imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can’t get current AI systems to do what we want, that AI systems are quickly becoming really impressive, and that some/most kinds of trend extrapolation or forecasting imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn’t prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don’t spend any resources on that at all. (This is similar to Eliezer’s point above.)
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes and my desire to get along with those people. I really think longtermists shouldn’t make people who work on other causes feel bad. However, I think it’s possible to commit to strong longtermism without making other people feel attacked, or too unappreciated. And I don’t think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when the conclusions are weird, or feel difficult, or we’re not completely sure. We make tradeoffs even when it feels really hard—like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things:
I don’t try to get everyone I talk to to work on longtermist things. I don’t think that would be good for the people I talk to, the EA community, or the long-term future.
I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea.
My all-things-considered view is a bit more moderate than this comment suggests, and I’m eager to hear Darius’, Ben’s, and others’ views on this.
“I’m not sure what counts as ‘astronomically’ more cost-effective, but if it means ~1000x more important/cost-effective I might agree with (ii).”
This may be the crux—I would not count a ~1000x multiplier as anywhere near “astronomical” and should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.
All my comment was meant to say is that it seems highly implausible that something like a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term-targeted versus near-term-targeted interventions.
It may cause significant confusion if the term “astronomical” is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
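For scale, these two uses of “astronomical” differ by 27 orders of magnitude:

\[
\frac{10^{30}}{10^{3}} = 10^{27}
\]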