There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness" (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/).
I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, and it makes me wonder what I'm missing.
I don't want the EA community to stop working on all non-longtermist things. But the reason is that I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly in the sense of "better health in poor countries" --> "more economic growth" --> "more innovation".)
For example, non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed] incoming at GPI comes to mind as a great success story along those lines); and work on non-longtermist causes has better feedback loops, so it might improve the community's skills (eg. Charity Entrepreneurship incubatees are probably highly skilled 2-5 years after the program, though I'm not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it's creating a community and culture of founding impact-oriented nonprofits, not because [it's better for shrimp/there's less lead in paint/fewer children smoke tobacco products]. Basically, I think the only reason the near-term interventions might be good is that they might make the long-term future go better.
I'm not sure what counts as "astronomically" more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii). It's hard to come up with a good thought experiment here to test this intuition.
One hypothetical is "would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 million gets donated to GiveWell's Maximum Impact Fund?". This is confusing though, because I'm not sure how important extra funding is in these areas. Another hypothetical is "would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)?". This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby longtermist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to a faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
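To make the funding hypothetical concrete, here is a toy calculation. The linear scaling and the exact multiplier are assumptions chosen purely for illustration, not estimates of either fund:

```python
# Illustrative only: toy expected-value comparison for the funding hypothetical above.
# All numbers are assumptions used to show how a ~1000x cost-effectiveness multiplier
# would cash out; they are not estimates of the actual funds.

NEARTERM_VALUE_PER_DOLLAR = 1.0   # normalise neartermist giving to 1 unit of value per $
LONGTERM_MULTIPLIER = 1000.0      # assumed cost-effectiveness edge of longtermist giving

def expected_value(dollars: float, value_per_dollar: float) -> float:
    """Toy linear model: expected value scales with the amount of money moved."""
    return dollars * value_per_dollar

longtermist_option = expected_value(10_000, NEARTERM_VALUE_PER_DOLLAR * LONGTERM_MULTIPLIER)
neartermist_option = expected_value(10_000_000, NEARTERM_VALUE_PER_DOLLAR)

print(f"$10k to the longtermist option:  {longtermist_option:,.0f} units")
print(f"$10M to the neartermist option:  {neartermist_option:,.0f} units")
```

With exactly a 1000x multiplier the two donations tie at 10,000,000 units, so preferring the $10,000 longtermist option in the externality-free version of the hypothetical amounts to believing the gap is at least around 1000x.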
I wonder if some of the reasons people don't hold the view I do are some combination of (1) "this feels weird so maybe it's wrong" and (2) "I don't want to be unkind to people working on neartermist causes".
I think (1) does carry some weight, and we should be cautious when acting on new, weird ideas that imply strange actions. However, I'm not sure how much longtermism actually falls into this category.
The idea is not that new, and there's been quite a lot of energy devoted to criticising the ideas. I don't know what others in this thread think, but I haven't found much of this criticism very convincing.
Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede that strong longtermism is initially very counterintuitive, though.
Strong longtermism doesn't imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can't get current AI systems to do what we want, that AI systems are quickly becoming really impressive, and that some/most kinds of trend-extrapolation or forecasts imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn't prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don't spend any resources on that at all. (This is similar to Eliezer's point above.)
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes, and my desire to get along with those people. I really think longtermists shouldn't make people who work on other causes feel bad. However, I think it's possible to commit to strong longtermism without making other people feel attacked or too unappreciated. And I don't think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it's weird, or it feels difficult, or we're not completely sure. We make tradeoffs even when it feels really hard, like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things:
I don't try to get everyone I talk to to work on longtermist things. I don't think that would be good for the people I talk to, the EA community, or the long-term future.
I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea.
My all-things-considered view is a bit more moderate than this comment suggests, and I'm eager to hear Darius', Ben's, and others' views on this.
I'm not sure what counts as "astronomically" more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).
This may be the crux: I would not count a ~1000x multiplier as anywhere near "astronomical", and I should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.
All my comment was meant to say is that it seems highly implausible that a multiplier of something like 10^30x also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term-targeted versus near-term-targeted interventions.
It may cause significant confusion if the term "astronomical" is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?
No, we probably don't. All of our actions plausibly affect the long-term future in some way, and it is difficult to justifiably achieve very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).
Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:
Occasionally there are even claims [among effective altruists] to the effect that "shaping the far future is 10^30 times more important than working on present-day issues," based on a naive comparison of the number of lives that exist now to the number that might exist in the future.
I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (...) Note that these are arguments about ex ante expected value, not necessarily actual impact. (...) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.
Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:
When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.
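To make the flow-through point concrete, here is one rough way to formalise it. This is my own notation and a deliberately simple model, not something taken from Tomasik's essays:

```latex
% Illustrative sketch only; all symbols are assumptions of this toy model.
% Assume all quantities are positive for simplicity. Let V be the value of the
% long-term future, n the direct near-term value per dollar of a neartermist
% intervention, \epsilon V its expected long-term flow-through per dollar, and
% p V the expected long-term effect per dollar of a longtermist intervention.
% The expected cost-effectiveness ratio is then
\[
  R = \frac{p V}{n + \epsilon V}.
\]
% If \epsilon is within a few orders of magnitude of p, then R \approx p / \epsilon,
% a modest ratio. For R to reach 10^{30}, the denominator would have to shrink to
% about 10^{-30} p V, i.e. \epsilon would have to cancel to within roughly 10^{-30} p:
% the "razor-thin exactness" Tomasik describes. By contrast, R \approx 10^{3} only
% needs \epsilon \approx 10^{-3} p, which is far less demanding.
```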
Phil Trammell's point in Which World Gets Saved is also relevant:
It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.
...
Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far "upstream", e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?
Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, "The utilitarian imperative 'Maximize expected aggregate utility!'" might not really, as Bostrom (2002) puts it, "be simplified to the maxim 'Minimize existential risk'".
For the record I'm not really sure about 10^30 times, but I'm open to 1000s of times.
And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
Pretty much every action has an expected impact on the future, in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea what the expected long-run impacts are or whether they would even be positive or negative; I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks.
If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined, so it does get a bit weird...).
Does that make any sense?
the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).
Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this model of the long-term flow-through effects of near-term-targeted interventions. There are many similar more-or-less plausible models of such long-term flow-through effects, some of which would suggest a positive net effect of near-term-targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn't place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).
Yep, not placing extreme weight. Just medium levels of confidence that, when summed over, add up to something pretty low or maybe mildly negative. I'm definitely not at 90%+ confidence on the flow-through effects being negative.
I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.)
But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).