Thank you for this Jack.
Floating an additional idea here, in terms of another misconception that I sometimes see. Very interested in your feedback:
Possible misconception: Someone has made a thorough case for “strong longtermism”
Possible misconception: “Greaves and MacAskill at GPI have set out a detailed argument for strong longtermism.”
My response: “Greaves and MacAskill argue for ‘axiological strong longtermism’ but this is not sufficient to make the case that what we ought to do is mainly determined by focusing on far future effects”
Axiological strong longtermism (AL) is the idea that: “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”
The colloquial use of strong longtermism on this forum (CL) is something like “In most of the ethical choices we face today we can focus primarily on the far-future effects of our actions”.
Now there are a few reasons why this might not follow (why CL might not follow from AL):
The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
Etc
Whether or not you agree with these reasons it should at least be acknowledged that the Case for Strong Longtermism paper focuses on making a case for AL – it does not actually try to make a case for CL. This does not mean there is no way to make a case for CL but I have not seen anyone try to and I expect it would be very difficult to do, especially if aiming for philosophical-level rigour.
– –
This misconception can be used in discussions for or against longtermism. If you happen to be a super strong believer that we should focus mainly on the far future, it would whisper caution; and if you think that Greaves and MacAskill’s arguments are poor, it would suggest being careful not to overstate their claims.
(PS. Both 1 and 2 seem likely to be true to me)
Thanks for this! I guess I agree with your overall point that the case isn’t as airtight as it could be. It’s for that reason that I’m happy that the Global Priorities Institute has put longtermism front and centre of their research agenda. I’m not sure I agree with your specific points though.
1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say). This is because, if axiological strong longtermism is true, the vast majority of the value of these actions will in fact be coming from the long-run effects. Ignoring this fact and just doing them based on their short-run effects wouldn’t seem to me to be a great idea, as if we were to come across evidence or otherwise conclude that the action isn’t in fact good from a long-run perspective, we wouldn’t be able to correct for this (and correcting for it would be very important). So I’m not convinced that AL doesn’t imply CL.
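To make the structure of this point concrete, here is a toy sketch (my own, with entirely made-up numbers, not from the paper or this thread): if AL holds and almost all of an option’s expected value sits in its long-run effects, then both the ranking of options and how we should update when the long-run case changes are driven by the long-run term.

```python
# Toy numbers (purely illustrative) for two options under axiological strong
# longtermism: long-run expected value dwarfs short-run expected value.
options = {
    #            (short-run EV, long-run EV)
    "option A": (10, 10_000),
    "option B": (12, 500),
}

def total_ev(short_run, long_run):
    return short_run + long_run

best = max(options, key=lambda name: total_ev(*options[name]))
print("Best option:", best)  # option A, almost entirely because of its long-run EV

# If new evidence removes the long-run case for option A, the ranking flips even
# though its short-run value is unchanged - which is why the justification (and
# any later correction) needs to track long-run effects rather than short-run ones.
options["option A"] = (10, 0)
best = max(options, key=lambda name: total_ev(*options[name]))
print("Best option after update:", best)  # option B
```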
2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
I would need to know more about your proposed alternative to comment. I would just point out (something I didn’t mention in my post) that Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects. They argue that the deontic longtermist claim follows from the axiological claim as, if axiological strong longtermism is true, it is true by a large margin, and that a plausible non-consequentialist theory has to be sensitive to the axiological stakes, becoming more consequentialist in output as the axiological stakes get higher.
3. Etc
I hope this doesn’t come across as snarky, but “etc.” makes it sound like there is a long list of obvious problems. To be honest, I’m not sure what these are beyond the ones I mention in my post, so it would probably be helpful for you to specify them.
Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.
I agree with your overall point that the case isn’t as airtight as it could be
I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe soon. Glad you agree.
I would also expect (although I can’t say for sure) that if you went and hung out with GPI academics and asked how certain they are about x, y, and z about longtermism, you would perhaps find less certainty than comes across from the outside, or than you might find on this forum, and I think it is useful for people to realise that.
Hence I thought it might be one for your list.
– –
The specific points 1. and 2. were mostly to serve as examples for the above (the “etc” was entirely in that vein, just to imply that there may be other things that a truly rigorous attempt to prove CL would throw up).
Main point made, and even roughly agreed on :-), so happy to opine a few thoughts on the truth of 1. and 2. anyway:
– –
1. The actions that are best in the short run are the same as the ones that are best in the long run
Please assume that by short-term I mean within 100 years, not within 10 years.
A few reasons you might think this is true:
Convergence: See your section on “Longtermists won’t reduce suffering today”. Consider some of the examples in the paper: speeding up progress, preventing climate change, etc. are quite possibly the best things you could do to maximise benefit over the next 100 years. AllFed justify working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence, it goes both ways: why are many of the examples in the paper so suspiciously close to what is short-run best?)
Try it: Try making the best plan you can, accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan but only take into account the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100,000 years? Does that plan look different? What about 1,000 years or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess would be that it would start to differ significantly after about 50-100 years. (Although it might well be the case that this number is higher for philanthropists than for policy makers.) (See the toy sketch after this list.)
Neglectedness: Note that the final two thirds of the next century (everything after about 33 years) barely features in any planning today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).
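The following is a minimal sketch of the “try it” exercise above (the value profiles and numbers are my own illustrative assumptions, not the commenter’s): each plan produces some value per year, and we ask at which horizon the recommended plan changes.

```python
# Toy model (illustrative assumptions only): each plan produces some value per
# year; pick whichever plan maximises cumulative value over a horizon of T years,
# and check at which horizons the recommended plan changes.
plans = {
    # plan name: value produced in year t
    "scale up proven short-term programmes": lambda t: 1.0,
    "invest in long-run institutions":       lambda t: 0.2 if t < 40 else 2.0,
}

def best_plan(horizon_years):
    def cumulative(value_fn):
        return sum(value_fn(t) for t in range(horizon_years))
    return max(plans, key=lambda name: cumulative(plans[name]))

for horizon in [10, 50, 100, 1_000, 100_000]:
    print(f"{horizon:>7} years: {best_plan(horizon)}")
# With these made-up numbers the recommendation flips somewhere between the
# 50-year and 100-year horizons and is then stable: past some point, lengthening
# the horizon stops changing the plan, which is the shape of the claim above.
```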
On:
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects … the value of these actions will in fact be coming from the long-run effects
I think I agree with this (at least intuitively agree, not having given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if the short-term best actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.
– –
2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
I agree that AL leads to ‘deontic strong longtermism’.
I don’t think the expected value approach (which is the dominant approach used in their paper) or the other approaches they discuss fully engage with how to make complex decisions about the far future. I don’t think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).
I would need to know more about your proposed alternative to comment.
Unfortunately, I am running out of time and weekend to go into this in much depth, so I hope you don’t mind if, instead of a lengthy answer here, I just link you to some reading.
I have recently been reading the following, which you might find an interesting introduction to how one might go about thinking about these topics, and which is fairly close to my views:
https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/
https://www.givewell.org/modeling-extreme-model-uncertainty
– –
Always happy to hear your views. Have a great week
I think I agree with this (at least intuitively agree, not having given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if the short-term best actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.
I just don’t really see a meaningful / important distinction between AL and CL to be honest. Let’s consider that AL is true, and also that cultivated meat happens to be the best intervention from both a short-termist and a longtermist perspective.
A short-termist might say: I want cultivated meat so that people stop eating animals, reducing animal suffering now.
A longtermist might say: I want cultivated meat so that people stop eating animals and therefore develop moral concern for all animals. This will reduce the risk of us locking in persistent animal suffering in the future.
In this case, if AL is true, I think we should also be colloquial longtermists and justify cultivated meat in the way the longtermist does, as that would be the main reason cultivated meat is good. If evidence were to come out that stopping eating meat doesn’t improve moral concern for animals, cultivated meat may no longer be great from a longtermist point of view—and it would be important to reorient based on this fact. In other words, I think AL should push us to strive to be colloquial longtermists.
Otherwise, thanks for the reading, I will have a look at some point!
I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say).
I think I essentially agree, and I think that these sorts of points are too often ignored. But I don’t 100% agree. In particular, I wouldn’t be massively surprised if, after a few years of relevant research, we basically concluded that there’s a systematic reason why the sort of things that are good for the short-term will tend to also be good for the long-term, and that we can basically get no better answers to what will be good for the long-term than that. (This would also be consistent with Greaves and MacAskill’s suggestion of speeding up progress as a possible longtermist priority.)
I’d bet against that, but not with massive odds. (It’d be better for me to operationalise my claim more and put a number on it, rather than making these vague statements—I’m just taking the lazy option to save time.)
And then if that was true, it could make sense to focus most of the time just on evaluating things based on short-term effects, because that’s easier to evaluate. We could have most people focusing on that proxy most of the time, while a smaller number of people continue checking whether that seems a good proxy and whether we can come up with better ones.
I think most longtermists are already doing something that’s not massively different from that: Most of us focus most of the time on reducing existential risk, or some specific type of existential risk (e.g., extinction caused by AI), as if that’s our ultimate, terminal goal. Or we might even most of the time focus on an even more “proximate” or “merely instrumental” proxy, like “improving institutions’ ability and motivation to respond effectively to [x]”, again as if that’s a terminal goal.
(I mean this to stand in contrast to consciously focusing on “improving the long-term future as much as possible”, and continually re-deriving what proxies to focus on based on that goal. That would just be less efficient.)
Then we sometimes check in on whether the proxies we focus on are actually what’s best for the future.
I think this approach makes sense, though it’s also good to remain aware of what’s a proxy and what’s an ultimate goal, and to recognise our uncertainty about how good our proxies are. (This post seems relevant, and in any case is quite good.)
Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects.
Yeah, this is also what came to mind for me when I read weeatquince’s comment. I’d add that Greaves and MacAskill also discuss some possible decision-theoretic objections, including objections to the idea that one should simply make decisions based on what seems to have the highest expected value, and argue that the case for longtermism seems robust to these objections. (I’m not saying they’re definitely right, but rather that they do seem to engage with those potential counterarguments.)
I agree that CL may or may not follow from AL depending on one’s other ethical and empirical views.
However, I’m not sure I understand if and why you think this is a problem for longtermism specifically, as opposed to effective altruism more broadly. For instance, consider the typical EA argument for donating to more rather than less effective global health charities. I think that argument essentially is that donating to a more effective charity has better ex-ante effects.
Put differently, I think many EAs donate to AMF because they believe that GiveWell has established that marginal donations to AMF have pretty good ex-ante effects compared to other donation options (at least if we only look at a certain type of effect, namely short-term effects on human beneficiaries). But I haven’t seen many people arguing on the EA Forum that, actually, it is a misconception that someone has made a thorough case for donating to AMF because maybe making decisions solely by evaluating ex-ante effects is not a useful way of interacting with the world. [1]
So you directing a parallel criticism at longtermism specifically leaves me a little confused. Perhaps I’m misunderstanding you?
(I’m setting aside your potential empirical defeater ‘1.’ since I largely agree with the discussion on it in the other responses to your comment. I.e. I think it is countered strongly, though not absolutely decisively, by the ‘beware suspicious convergence’ argument.)
[1] People have claimed that there isn’t actually a strong case for donating to AMF; but usually such arguments are based on types of effects (e.g. on nonhuman animals or on far-future outcomes) that the standard pro-AMF case allegedly doesn’t sufficiently consider rather than on claims that, actually, ex-ante effects are the wrong kind of thing to pay attention to in the first place.
tl;dr – The case for giving to GiveWell top charities is based on much more than just expected value calculations.
The case for longtermism (CL) is not based on much more than expected value calculations; in fact, many non-expected-value arguments currently seem to point the other way. This has led to a situation where there are many weak arguments against longtermism and one very strong argument for longtermism. This is hard to evaluate.
We (longtermists) should recognise that we are new and there is still work to be done to build a good theoretical base for longtermism.
Hi Max,
Good question. Thank you for asking.
– –
The more I have read by GiveWell (and to a lesser degree by groups such as Charity Entrepreneurship and Open Philanthropy) the more it is apparent to me that the case for giving to the global poor is not based solely on expected value but is based on a very broad variety of arguments.
For example I recommend reading:
https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking
https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
https://www.givewell.org/modeling-extreme-model-uncertainty
https://forum.effectivealtruism.org/posts/h6uXkwFzqqr2JdZ4e/joey-savoie-tools-for-decision-making
The rough pattern of these posts is that taking a broad variety of different decision-making tools and approaches, and seeing where they all converge and point to, is better than just looking at expected value (or using any other single tool). Expected value calculations are not the only way to make decisions, and the authors would not be convinced by the arguments for giving to the global poor if those arguments were based solely on expected value calculations and not on historical evidence, good feedback loops, expert views, strategic considerations, etc.
For example in [1.] Holden describes how he was initially sceptical that: ”donations can do more good when targeting the developing-world poor rather than the developed-world poor “ but he goes on to say that: ”many (including myself) take these arguments more seriously on learning things like “people I respect mostly agree with this conclusion”; “developing-world charities’ activities are generally more robustly evidence-supported, in addition to cheaper”; “thorough, skeptical versions of ‘cost per life saved’ estimates are worse than the figures touted by charities, but still impressive”; “differences in wealth are so pronounced that “hunger” is defined completely differently for the U.S. vs. developing countries“; “aid agencies were behind undisputed major achievements such as the eradication of smallpox”; etc.”
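As a side note, the second GiveWell post linked above (“Why we can’t take expected value estimates literally”) makes this point with an explicit Bayesian model; here is a minimal sketch of that adjustment, where the particular numbers are my own illustrative assumptions rather than anything from the post or this thread.

```python
# Sketch of the Bayesian adjustment idea: a noisy explicit EV estimate is
# combined with a prior, and the noisier the estimate, the more it is shrunk
# toward the prior (precision-weighted average of two normal distributions).

def bayesian_adjust(prior_mean, prior_sd, estimate, estimate_sd):
    """Posterior mean and sd for a normal prior and an unbiased, noisy estimate."""
    prior_prec = 1.0 / prior_sd ** 2
    est_prec = 1.0 / estimate_sd ** 2
    post_mean = (prior_mean * prior_prec + estimate * est_prec) / (prior_prec + est_prec)
    post_sd = (1.0 / (prior_prec + est_prec)) ** 0.5
    return post_mean, post_sd

# A single rough model says an intervention is ~100x as good as a baseline, but
# the estimate is very uncertain; the prior expectation is far more modest.
post_mean, post_sd = bayesian_adjust(prior_mean=5, prior_sd=2, estimate=100, estimate_sd=80)
print(f"Adjusted estimate: {post_mean:.1f} (sd {post_sd:.1f})")
# The adjusted figure lands near the prior (~5), so the huge but fragile estimate
# moves the conclusion only a little. That is one formal version of "expected
# value calculations are not the only way to make decisions".
```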
– –
Now I am actually somewhat sceptical of some of this writing. I think much of it is a pushback against longtermism. Remember the global development EAs have had to weather the transition from “give to global health, it has the highest expected value” to “give to global health, it doesn’t have the highest expected value (longtermism has that) but is good for many other reasons”. So it is not surprising that they have gone on to express that there are many other reasons to care about global health that are not based in expected value calculations.
– –
But that possible “status quo bias” does not mean they are wrong. It is still the case that GiveWell have made a host of arguments for global health beyond expected value and that the longtermism community has not done so. The longtermism community has not produced historical evidence, or highlighted successful feedback loops, or demonstrated that its reasoning is robust to a broad variety of possible worldviews, or built strong expert consensus. (Although the case has been made that preventing extreme risks is robust to very many possible futures, so that at least is a good longtermist argument that is not based on expected value.)
In fact, to some degree the opposite is the case. People who argue against longtermism have pointed to cases where long-term-style planning historically led to totalitarianism, or to the common-sense weirdness of longtermist conclusions, etc. My own work on risk management suggests that, especially when planning for disasters, it is good not to put too much weight on expected value but to assume that something unexpected will happen.
The fact is that the longtermist community has much weirder conclusions than the global health community yet has put much less effort into justifying those conclusions.
– –
To me it looks like all this has led to a situation where there are many weak arguments against longtermism (CL) and one very strong argument for longtermism (AL->CL). This is problematic, as it is very hard to compare one strong argument against many weak arguments, and which side you fall on will depend largely on your empirical views and how you weigh up evidence. This ultimately leads to unconstructive debate.
– –
I think the longtermist view is likely roughly correct. But I think that the case for longtermism has not been made rigorously or even particularly well (certainly it does not stand up well to Holden’s “cluster thinking” ideals). I don’t see this as a criticism of the longtermist community, as the community is super new and the paper arguing the case even just from the point of view of expected value is still in draft! I just think it is a misconception worth adding to the list that the community has finished making the case for longtermism – we should recognise our newness and that there is still work to be done, and not pretend we have all the answers. The EA global health community has built this broad theoretical base beyond expected value, and so can we, or we can at least try.
– –
I would be curious to know the extent to which you agree with this.
Also, I think the way I have mapped the situation here is a bit more nuanced than in my previous comment, so I want to acknowledge a subtle changing of views between my earlier comment and this one, ask that if you respond you respond to the views as set out here rather than above, and of course thank you for your insightful comment that led to my views evolving – thank you Max!
– – – –
(PS. On the other topic you mention. [Edited: I am not yet sure of the extent to which I think] the ‘beware suspicious convergence’ counter-argument [applies] in this context. Is it suspicious that if you make a plan for 1000 years it looks very similar to if you make a plan for 10000 years? Is it suspicious that if I plan for 100000 years or 100 years what I do in the next 10 years looks the same? Is it suspicious that if I want to go from my house in the UK to Oslo the initial steps are very similar to if I want to go from my house to Australia – ie. book ticket, get bus to train station, get train to airport? Etc? [Would need to give this more thought but it is not obvious] )
Hi Sam, thank you for your thoughtful reply.
Here are some things we seem to agree on:
The cases for specific priorities or interventions that are commonly advocated based on a longtermist perspective (e.g. “work on technical AI safety”) are usually far from watertight. It could be valuable to improve them, by making them more “robust” or otherwise.
Expected-value calculations that are based on a single quantitative model have significant limitations. They can be useful as one of many inputs to a decision, but it would usually be bad to use them as one’s sole decision tool.
(I am actually a big fan of the GiveWell/Holden Karnofsky posts you link to. When I disagree with other people it often comes down to me favoring more “cluster thinking”. For instance, these days this happens a lot to me when talking to people about AI timelines, or other aspects of AI risk.)
However, I think I disagree with your characterization of the case for CL more broadly, at least for certain uses/meanings of CL.
Here is one version of CL which I believe is based on much more than just expected-value calculations within a single model: This is roughly the claim that (i) in our project of doing as much good as possible we should at the highest level be mostly guided by very long-run effects and (ii) this makes an actual difference for how we plan and prioritize at intermediate levels.
Here I have a picture in mind that is roughly as follows:
Lowest level: Which among several available actions should I take right now?
Intermediate levels:
What are the “methods” and inputs (quantitative models, heuristics, intuitions, etc.) I should use when thinking about the lowest level?
What systems, structures, and incentives should we put in place to “optimize” which lowest-level decision situations I and other agents find ourselves in in the first place?
How do I in turn best think about which methods, systems, structures, etc. to use for answering these intermediate-level questions?
Etc.
Highest level: How should I ultimately evaluate the intermediate levels?
So the following would be one instance of part (i) of my favored CL claim: When deciding whether to use cluster thinking or sequence thinking for a decision, we should aim to choose whichever type of thinking best helps us find the option with most valuable long-run effects. For this it is not required that I make the choice between sequence thinking or cluster thinking by an expected-value calculation, or indeed any direct appeal to any long-run effects. But, ultimately, if I think that, say, cluster thinking is superior to sequence thinking for the matter at hand, then I do so because I think this will lead to the best long-run consequences.
And these would be instances of part (ii): that often we should decide primarily based on the proxy of “what does most reduce existential risk?”; that it seems good to increase the “representation” of future generations in various political contexts; etc.
Regarding what the case for this version of CL rests on:
For part (i), I think it’s largely a matter of ethics/philosophy, plus some high-level empirical claims about the world (the future being big etc.). Overall very similar to the case for AL. I think the ethics part is less in need of “cluster thinking”, “robustness” etc. And that the empirical part is, in fact, quite “robustly” supported.
[This point made me most want to push back against your initial claim about CL:] For part (ii), I think there are several examples of proxy goals, methods, interventions, etc., that are commonly pursued by longtermists which have a somewhat robust case behind them that does not just rely on an expected value estimate based on a single quantitative model. For instance, avoiding extinction seems very important from a variety of moral perspectives as well as common sense, there are historical precedents of research and advocacy at least partly motivated by this goal (e.g. nuclear winter, asteroid detection, perhaps even significant parts of environmentalism), there is a robust case for several risks longtermists commonly worry about (including AI), etc. More broadly, conversations involving explicit expected value estimates, quantitative models, etc. are only a fraction of the longtermist conversations I’m seeing. (If anything I might think that longtermists, at least in some contexts, make too little use of these tools.) E.g. look at the frontpage of LessWrong, or their curated content. I’m certainly not among the biggest fans of LessWrong or the rationality community, but I think it would be fairly inaccurate to say that a lot of what is happening there is people making explicit expected value estimates. Ditto for longtermist content featured in the EA Newsletter, etc. etc. I struggle to think of any example I’ve seen where a longtermist has made an important decision based just on a single EV estimate.
Rereading your initial comment introducing AL and CL, I’m less sure if by CL you had in mind something similar to what I’m defending above. There certainly are other readings that seem to hinge more on explicit EV reasoning or that are just absurd, e.g. “CL = never explicitly reason about anything happening in the next 100 years”. However, I’m less interested in these versions since they to me would seem to be a poor description of how longtermists actually reason and act in practice.
Not sure this “many weak arguments” way of looking at it is quite correct either. I had a quick look at the arguments given against longtermism and there are not that many of them. Maybe a better point is that there are many avenues and approaches that remain unexplored.