No-one is proposing we go 100% on strong longtermism, and ignore all other worldviews, uncertainty and moral considerations.
You say:
> the “strong longtermism” camp, typified by Toby Ord and Will MacAskill, who seem to imply that Effective Altruism should focus entirely on longtermism.
They wrote a paper about strong longtermism, but this paper is about clearly laying out a philosophical position, and is not intended as an all-things-considered assessment of what we should do. (Edit: And even the paper is only making a claim about what’s best at the margin; they say in footnote 14 they’re unsure whether strong longtermism would be justified if more resources were already spent on longtermism.)
In The Precipice, which is more intended that way, Toby is clear that he thinks existential risk should be seen as “a” key global priority, rather than “the only” priority.
He also suggests the rough target of spending 0.1% of GDP on reducing existential risk, which is quite a bit less than 100%.
And he’s clearly supported other issues with his life.
Will is taking a similar approach in his new book about longtermism.
Even the most longtermist members of effective altruism typically think we should allocate about 20% of resources to neartermist efforts. No-one says longtermist causes are astronomically more impactful.
I think there are a bunch of ways we could make this weakening more precise, whether that’s defining a weaker form of longtermism, worldview diversification, moral uncertainty, etc., and that’s an interesting discussion to be had. But I think it’s important to start by pointing out that, as far as I’m aware, no key researchers hold the position you’re contrasting against.
Not that it undermines your main point, which I agree with, but a fair minority of longtermists certainly say and believe this.
There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
Basically, in this context the same points apply that Brian Tomasik made in his essay “Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness” (https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/).
I tentatively believe (ii), depending on some definitions. I’m somewhat surprised to see Ben and Darius implying it’s a really weird view, and it makes me wonder what I’m missing.
I don’t want the EA community to stop working on all non-longtermist things. But that’s because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don’t mean indirect effects more broadly in the sense of ‘better health in poor countries’ --> ‘more economic growth’ --> ‘more innovation’.)
For example non-longtermist interventions are often a good way to demonstrate EA ideas and successes (eg. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (eg. [name removed] incoming at GPI comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops so it might improve the community’s skills (eg. Charity Entrepreneurship incubatees probably are highly skilled 2-5 years after the program. Though I’m not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons are that I think the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. Eg. I think Charity Entrepreneurship is good because it’s creating a community and culture of founding impact-oriented nonprofits, not because [it’s better for shrimp/there’s less lead in paint/fewer children smoke tobacco products]. Basically I think the only reasons the near-term interventions might be good is because they might make the long-term future go better.
I’m not sure what counts as ‘astronomically’ more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii). It’s hard to come up with a good thought experiment here to test this intuition.
One hypothetical is ‘would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 mil gets donated to GiveWell’s Maximum Impact Fund’. This is confusing though, because I’m not sure how important extra funding is in these areas. Another hypothetical is ‘would you rather 10 fairly smart people devote their careers to longtermist causes (eg. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (eg. following AAC advice)’. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby long-termist things like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to faster flow of people from veganism to EA over time. They would also do projects which EA could point to as success, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
I wonder if part of the reason people don’t hold the view I do is some combination of (1) ‘this feels weird so maybe it’s wrong’ and (2) ‘I don’t want to be unkind to people working on neartermist causes’.
I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I’m not sure how much longtermism actually falls into this category.
The idea is not that new, and there’s been quite a lot of energy devoted to criticising the ideas. I don’t know what others in this thread think, but I haven’t found much of this criticism very convincing.
Weak longtermism (future people matter morally) is intuitive for lots of people (though not all, which is fine). I concede strong longtermism is initially very counterintuitive though.
Strong longtermism doesn’t imply we should do particularly weird things. It implies we should do things like: get prepared for pandemics, make it harder for people to create novel pathogens, reduce the risk of nuclear war, and take seriously the facts that we can’t get current AI systems to do what we want, that AI systems are quickly becoming really impressive, and that some/most kinds of trend extrapolation or forecasting imply AGI in the next 10-120 years. Sure, strong longtermism implies we shouldn’t prioritise helping people in extreme poverty. But helping people in extreme poverty is not the default action; most people don’t spend any resources on that at all. (This is similar to Eliezer’s point above.)
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes and my desire to get along with those people. I really think longtermists shouldn’t make people who work on other causes feel bad. However, I think it’s possible to commit to strong longtermism without making other people feel attacked, or too unappreciated. And I don’t think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it’s weird, or it feels difficult, or we’re not completely sure. We make tradeoffs even when it feels really hard—like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things:
I don’t try to get everyone I talk to to work on longtermist things. I don’t think that would be good for the people I talk to, the EA community, or the longterm future
I really value hearing arguments against longtermism. These are helpful for finding out if longtermism is wrong, figuring out the best ways to explain longtermism, and spotting potential failure modes of acting on longtermism. I sometimes think about paying someone to write a really good, clear case for why acting on strong longtermism is most likely to be a bad idea
My all-things-considered view is a bit more moderate than this comment suggests, and I’m eager to hear Darius’, Ben’s, and others’ views on this.
> I’m not sure what counts as ‘astronomically’ more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).
This may be the crux—I would not count a ~ 1000x multiplier as anywhere near “astronomical” and should probably have made this clearer in my original comment.
Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.
All my comment was meant to say is that it seems highly implausible that such a 10^30x multiplier also applies to claim (ii), regarding the expected cost-effectiveness differences of long-term targeted versus near-term targeted interventions.
It may cause significant confusion if the term “astronomical” is used in one context to refer to a 10^30x multiplier and in another context to a 1000x multiplier.
Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don’t we pretty much get to (ii)?
No, we probably don’t. All of our actions plausibly affect the long-term future in some way, and it is difficult to achieve (or to be justified in holding) very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly compatible with longtermist interventions being a few orders of magnitude more impactful in expectation than neartermist interventions (but the difference is most likely not astronomical).
Brian Tomasik eloquently discusses this specific question in the above-linked essay. Note that while his essay focuses on charities, the same points likely apply to interventions and causes:
> Occasionally there are even claims [among effective altruists] to the effect that “shaping the far future is 10^30 times more important than working on present-day issues,” based on a naive comparison of the number of lives that exist now to the number that might exist in the future.
> I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
> It would require razor-thin exactness to keep the expected impact on the future of one set of actions 10^30 times lower than the expected impact of some other set of actions. (…) Note that these are arguments about ex ante expected value, not necessarily actual impact. (…) Suggesting that one charity is astronomically more important than another assumes a model in which cross-pollination effects are negligible.
Brian Tomasik further elaborates on similar points in a second essay, Charity Cost-Effectiveness in an Uncertain World. A relevant quote:
> When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing.
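To make the “razor-thin exactness” point concrete, here is a toy way of writing it down; this is my own sketch with entirely made-up numbers, not something from the essays. Let A be a longtermist intervention and B a neartermist one, and split each expected value into a direct effect D and an expected long-run flow-through effect F.
```latex
% Toy model with hypothetical numbers (my sketch, not Tomasik's):
% expected value of an intervention = direct effect D + long-run flow-through F.
\[
\frac{V_A}{V_B} \;=\; \frac{D_A + F_A}{D_B + F_B} \;\le\; \frac{D_A + F_A}{F_B}
\;\approx\; \frac{F_A}{F_B} \qquad (\text{if } F_A \gg D_A \text{ and } D_B \ge 0).
\]
% If the neartermist intervention B has any non-negligible expected flow-through,
% say F_B >= 10^{-6} F_A, the ratio tops out around 10^6. Reaching 10^30 would
% require F_B's expectation to sit within roughly one part in 10^30 of zero.
```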
Phil Trammell’s point in Which World Gets Saved is also relevant:
> It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.
> ...
> Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far “upstream”, e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?
> Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, “The utilitarian imperative ‘Maximize expected aggregate utility!‘” might not really, as Bostrom (2002) puts it, “be simplified to the maxim ‘Minimize existential risk’”.
For the record I’m not really sure about 10^30 times, but I’m open to 1000s of times.
> And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.
Pretty much every action has an expected impact on the future in that we know it will radically alter the future e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn’t necessarily mean we have any idea on the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation for example I have virtually no idea of what the expected long-run impacts are and if this would even be positive or negative—I’m just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks.
If I’m utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it’s sort of like comparing 10^30 to undefined so it does get a bit weird...).
Does that make any sense?
> the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).
I think I believe (ii), but it’s complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it’s pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).
Please see my above response to jackmalde’s comment. While I understand and respect your argument, I don’t think we are justified in placing high confidence in this model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately assess the plausibility of these models, we simply shouldn’t place extreme weight on one specific model (and its practical implications) while ignoring other models (which may arrive at the opposite conclusion).
Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.
I’m unwilling to pin this entirely on the epistemic uncertainty, and specifically don’t think everyone agrees that, for example, interventions targeting AI safety aren’t the only thing that matters, period. (Though this is arguably not even a longtermist position.)
But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).
I was talking about the EA Leaders Forum results, where people were asked to compare dollars to the different EA Funds, and most were unwilling to say that one fund was even 100x higher-impact than another; maybe 1000x at the high end. That’s rather a long way from 10^23 times more impactful.
Cool. Yeah, EA funds != cause areas. Because people may think that work done by EA funds in a cause area is net positive, whereas the total of work done in that area is negative. Or they may think that work done on some cause is 1/100th as useful as another cause, but only because it might recruit talent to the other, which is the sort of hard-line view that one might want to mention.
Indeed, I took that survey one year, and the reason why I wouldn’t put the difference at 10^23 or something extremely large like that is because there are flowthrough effects of other cause areas that still help with longtermist stuff (like, GiveWell has been pretty helpful for also getting more work to happen on longtermist stuff).
I do think that as a cause area from a utilitarian perspective, interventions that affect the longterm future are astronomically more effective than things that help the short term future but are very unlikely to have any effect on the long term, or even slightly harm the longterm.
Sure, though I still think it’s misleading to say that the survey respondents think “EA should focus entirely on longtermism”.
Seems more accurate to say something like “everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies.”
To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.
To some degree my response to this situation is “let’s create a separate longtermist community, so that I can indeed invest in that in a way that doesn’t get diluted with all the other things that seem relatively unimportant to me”. If we had a large and thriving longtermist community, it would definitely seem bad to me to suddenly start investing into all of these other things that EA does that don’t really seem to check out (to me) from a utilitarian perspective, and I would be sad to see almost any marginal resources moved towards the other causes.
I’m strongly opposed to this, and think we need to be clear: EA is a movement of people with different but compatible values, dedicated to understanding how to do good. It’s fine for you to discuss why you think longtermism is valuable, but it’s not as though anyone gets to tell the community what values the community should have.
The idea that there is a single “good” which we can objectively find and then maximize is a bit confusing to me, given that we know values differ. (And this has implications for AI alignment, obviously.) Instead, EA is a collaborative endeavor of people with compatible interests. If strong longtermists’ interests really are incompatible with most of EA, as yours seem to be, that’s a huge problem, especially because many of the people who seem to embrace this viewpoint are in leadership positions. I didn’t think it was the case that there was such a split, but perhaps I am wrong.
I think we don’t disagree?
I agree, EA is a movement of different but compatible values, and given its existence, I don’t want to force anything on it, or force anyone to change their values. It’s a great collaboration of a number of people with different perspectives, and I am glad it exists. Indeed the interests of different people in the community are pretty compatible, as evidenced by the many meta interventions that seem to help many causes at the same time.
I don’t think my interests are incompatible with most of EA, and am not sure why you think that? I’ve clearly invested a huge amount of my resources into making the broader EA community better in a wide variety of domains, and generally care a lot about seeing EA broadly get more successful and grow and attract resources, etc.
But I think it’s important to be clear which of these benefits are gains from trade, vs. things I “intrinsically care about” (speaking a bit imprecisely here). If I could somehow get all of these resources and benefits without having to trade things away, and instead just build something of similar scale and level of success that was more directly aligned with my values, that seems better to me. I think historically this wasn’t really possible, but with longtermist stuff finding more traction, I am now more optimistic about it. But also, I still expect EA to provide value for the broad range of perspectives under its tent, and expect that investing in it in some capacity or another will continue to be valuable.
Sorry, this was unclear, and I’m both not sure that we disagree, and want to apologize if it seemed like I was implying that you haven’t done a tremendous amount for the community, and didn’t hope for its success, etc. I do worry that there is a perspective (which you seem to agree with) that if we magically removed all the various epistemic issues with knowing about the long term impacts of decisions, longtermists would no longer be aligned with others in the EA community.
I also think that longtermism is plausibly far better as a philosophical position than as a community, as mentioned in a different comment, but that point is even farther afield, and needs a different post and a far more in-depth discussion.
Agree it’s more accurate. How I see it:
> Longtermists overwhelmingly place some moral weight on non-longtermist views and support the EA community carrying out some non-longtermist projects. Most of them, but not all, diversify their own time and other resources across longtermist and non-longtermist projects. Some would prefer to partake in a new movement that focused purely on longtermism, rather than EA.
Worth noting the ongoing discussions about how longtermism is better thought of / presented as a philosophical position rather than a social movement.
The argument is something like: just like effective altruists can be negative utilitarians or deontologists or average utilitarians, and just like they can have differing positions about the value of animals, the environment, and wild animal suffering, they can have different views about longtermism. And just like policymakers take different viewpoints into account without needing to commit to anything, longtermism as a position can exist without being a movement you need to join.
Good points, but if I understand what you’re saying, that survey was asking about specific interventions funded by those funds, given our epistemic uncertainties, not the balance of actual value in the near term versus the long term, or what the ideal focus should be if we found the optimal investments for each.
I do think it is important to distinguish these moral uncertainty reasons from moral trade and cooperation and strategic considerations for hedging. My argument for putting some focus on near-termist causes would be of this latter kind; the putative moral uncertainty/worldview diversification arguments for hedging carry little weight with me.
As an example, Greaves and Ord argue that under the expected choiceworthiness approach, our metanormative ought is practically the same as the total utilitarian ought.
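For readers unfamiliar with the framework, “maximizing expected choiceworthiness” is usually written roughly as below. This is only a gloss on the approach Greaves and Ord discuss, and it brushes past the serious problem of putting different theories’ choiceworthiness on a common scale.
```latex
% Rough gloss of "maximize expected choiceworthiness" (MEC), assuming the
% choiceworthiness functions CW_i of the different theories T_i are
% intertheoretically comparable (a big assumption in its own right).
\[
EC(a) \;=\; \sum_i c_i \, CW_i(a), \qquad a^* \in \arg\max_a \, EC(a),
\]
% where c_i is one's credence in theory T_i. If one theory (e.g. totalism about
% future lives) assigns stakes many orders of magnitude larger than the others,
% its term dominates the sum even at modest credence, which is roughly how the
% metanormative recommendation can end up tracking the total utilitarian one.
```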
It’s tricky because the paper on strong longtermism makes the theory sound like it does want to completely ignore other causes—eg ‘short-term effects can be ignored’. I think it would be useful to have a source to point to that states ‘the case for longtermism’ without giving the impression that no other causes matter.
Just to second this because it seems to be a really common mistake: Greaves and MacAskill stress in the strong longtermism paper that the aim is to advance an argument about what someone should do with their impartial altruistic budget (of time or resources), not to tell anyone how large that budget should be in the first place.
Also, I think the author would be able to avoid what they see as a “non-rigorous” decision to weight the short-term and long-term the same by reconceptualising the uneasiness around longtermism dominating their actions as an uneasiness with their totally impartial budget taking up more space in their life. I think everyone I have talked to about this feels a pull to support present-day people and problems alongside the future, so it might help to just bracket off the present-day section of your commitments away from the totally impartial side, especially if the argument against the longtermist conclusion is that it precludes other things you care about. No one can live an entirely impartial life and we should recognise that, but this doesn’t necessarily mean that the arguments for the rightness of doing so are wrong.
Thanks, that is valuable, but there are a couple of pieces here I want to clarify. I agree that there is space for people to have a budget for non-impartial altruistic donations. I am arguing that within the impartial altruistic budget, we should have a place for a balance between discounted values that emphasize the short term and impartial welfarist longtermism. Perhaps this is what you mean by “bracket off the present day section of your commitments away from the totally impartial side.”
For example, I give at least 10% of my budget to altruistic causes, but I reserve some of the money for GiveWell, the Against Malaria Foundation, and similar charities, rather than focusing entirely on longtermist causes. This is in part due to moral uncertainty, at least on my part, since even putting aside the predictability argument, the argument for prioritizing possible future lives rests on a few assumptions that are debatable.
But I’m very unhappy with the claim that “No one can live an entirely impartial life and we should recognise that,” which is largely what led to the post. This type of position implies, among other things, that morality is objective and independent of instantiated human values, and that we’re saying everyone is morally compromised. If what we are claiming as impartial welfare maximization requires that philosophical position, and we also agree it’s not something people can do in practice, I’d argue we are doing something fundamentally wrong both in practice, condemning everyone for being immoral while saying they should do better, and in theory, saying that longtermist EA only works given an objective utilitarian position on morality. Thankfully, I disagree, and I think these problems are both at least mostly fixable, hence my (still-insufficient, partially worked out) argument in the post. But I wasn’t trying to solve morality ab initio based on my intuitions. And perhaps I need to extend it to the more general position of how to allocate money and effort across both personal and altruistic spending—which seems like a far larger if not impossible general task.
Thanks for the post and the response, David; that helpfully clarifies where you are coming from. What I was trying to get at is that if you want to say that strong longtermism isn’t the correct conclusion for an impartial altruist who wants to know what to do with their resources, then that would call for more argument as to where the strong longtermist’s mistake lies or where the uncertainty should be. On the other hand, it would be perfectly possible to say that the impartial altruist should end up endorsing strong longtermism, while recognising that you yourself are not entirely impartial (and have done with the issue). Personally I also think that strong longtermism relies on very debatable grounds, and I would also put some uncertainty on the claim “the impartial altruist should be a strong longtermist”. The tricky and interesting thing is working out where we disagree with the longtermist.
(also I recognise as you said that this post is not supposed to be a final word on all these problems, I’m just pointing to where the inquiry could go next).
On the second part of your response, I think that depends on what motivates you and what your general worldview is. I don’t believe in objective moral facts, but I also generally see the world as a place where each and all could do better. For some that helps motivate action, for some it causes angst. I don’t think there is a correct view there.
Separately, I do actually worry that strong longtermism only works for consequentialists (though you don’t have to believe in objective morality). The recent paper attempts to make the foundations more robust but the work there is still in its infancy. I guess we will see where it goes.
Thanks for the response. I think we mostly agree, at least to the extent that these questions have answers at all.
Definitely, cheers!
I don’t think your point about Toby’s GDP recommendation is inconsistent with David’s claim that Toby/Will seem to imply “Effective Altruism should focus entirely on longtermism” since EA is not in control of all of the world’s GDP. It’s consistent to recommend EA focus entirely on longtermism and that the world spend 0.1% of GDP on x-risk (or longtermism).
I agree it’s not entailed by that, but both Will and Toby were also in the Leaders Forum Survey I linked to. From knowing them, I’m also confident that they wouldn’t agree with “EA should focus entirely on longtermism”.
That’s a very good point—and if that is the entire claim, I would strongly endorse it. But, from what I have read, that is not what strong longtermism actually claims, according to proponents.
I’d like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community’s resources across (longtermist and neartermist) causes:
TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar. If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes. I think that many causes in the effective altruism sphere interact more multiplicatively than additively, implying that it’s important to heavily support multiple causes, not just to focus on the most appealing one.
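A minimal numerical sketch of that TL;DR, using hypothetical numbers and the simplifying assumption of linear returns within each cause:
```python
# Toy comparison: how to split a fixed budget between cause A and cause B when
# their impacts add versus when they multiply. All numbers are made up.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)   # fraction of the budget spent on cause A
a, b = 3.0, 1.0                    # hypothetical impact per unit spent on A and B

additive = a * xs + b * (1.0 - xs)            # total impact if the causes add
multiplicative = (a * xs) * (b * (1.0 - xs))  # total impact if they multiply

print(xs[np.argmax(additive)])        # 1.0 -- go all-in on the higher-value cause
print(xs[np.argmax(multiplicative)])  # 0.5 -- split the budget evenly
```
With additive impacts the optimum sits at a corner, while the multiplicative product is maximized by splitting the budget, which is the essay’s case for supporting several causes rather than only the single most appealing one.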