Thanks for this! I guess I agree with your overall point that the case isn’t as airtight as it could be. It’s for that reason that I’m happy that the Global Priorities Institute has put longtermism front and centre of their research agenda. I’m not sure I agree with your specific points though.
1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say). This is because, if axiological strong longtermism is true, the vast majority of the value of these actions will in fact be coming from the long-run effects. Ignoring this fact and just doing them based on their short-run effects wouldn’t seem to me to be a great idea: if we were to come across evidence, or otherwise conclude, that the action isn’t in fact good from a long-run perspective, we wouldn’t be able to correct for this (and correcting for it would be very important). So I’m not convinced that AL doesn’t imply CL.
2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
I would need to know more about your proposed alternative to comment. I would just point out (something I didn’t mention in my post) that Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects. They argue that the deontic longtermist claim follows from the axiological claim as, if axiological strong longtermism is true, it is true by a large margin, and a plausible non-consequentialist theory has to be sensitive to the axiological stakes, becoming more consequentialist in output as the axiological stakes get higher.
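(One toy way to formalise that last clause — this gloss is mine, not anything in the paper. Suppose a non-consequentialist view ranks options by axiological value V but permits choosing a suboptimal option so long as the value forgone stays within some fixed tolerance t:

```latex
% Toy stakes-sensitive permissibility rule -- my own gloss, not from the paper.
% V(a) is the axiological value of option a; a* is the value-maximising option.
\[
  \mathrm{Permissible}(a) \iff V(a^{\ast}) - V(a) \le t .
\]
% If axiological strong longtermism is true "by a large margin", then for
% short-term-justified options the gap V(a*) - V(a) is enormous, so it exceeds
% any plausible fixed tolerance t: the permissible set collapses towards {a*},
% and the theory becomes more consequentialist in output as the stakes rise.
```

On this sketch, raising the stakes doesn’t change the theory’s principles, only its outputs, which is one way to read Greaves and MacAskill’s claim.)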
3. Etc
I hope this doesn’t come across as snarky, but “etc.” makes it sound like there is a long list of obvious problems. To be honest, I’m not sure what these are beyond the ones I mention in my post, so it would probably be helpful for you to specify them.
Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.
I agree with your overall point that the case isn’t as airtight as it could be
I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe soon. Glad you agree.
I would also expect (although I can’t say for sure) that if you were to hang out with GPI academics and ask how certain they are about x, y, and z regarding longtermism, you would find less certainty than comes across from the outside, or than you might find on this forum, and it would be useful for people to realise that.
Hence I thought it might be one for your list.
– –
The specific points 1. and 2. were mostly to serve as examples for the above (the “etc” was entirely in that vein, just to imply that there may be things that a truly rigorous attempt to prove CL would throw up).
Main point made, and even roughly agreed on :-), so I’m happy to offer a few thoughts on the truth of 1. and 2. anyway:
– –
1. The actions that are best in the short run are the same as the ones that are best in the long run
Please assume that by short-term I mean within 100 years, not within 10 years.
A few reasons you might think this is true:
Convergence: See your section on “Longtermists won’t reduce suffering today”. Consider some of the examples in the paper: speeding up progress, preventing climate change, etc. are quite possibly the best things you could do to maximise benefit over the next 100 years. AllFed justify working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence, it goes both ways: why are many of the examples in the paper so suspiciously close to what is short-run best?)
Try it: Try making the best plan you can, accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan taking into account only the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100,000 years? Does that plan look different? What about 1,000 years, or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess is that plans would start to differ significantly at horizons of about 50-100 years (although this number might well be higher for philanthropists than for policymakers). A toy numerical sketch of this exercise follows this list.
Neglectedness: Note that the last two thirds of the next century (everything after about 33 years) features in almost no planning done today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).
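Here is the promised toy sketch of the “try it” exercise. Every payoff number is a made-up illustrative assumption, not a claim about any real intervention; the point is only to show how the best action can depend on the planning horizon, and how plans for astronomically long horizons coincide.

```python
# Toy model of the "try it" exercise. All numbers below are
# illustrative assumptions, not empirical claims.

def total_value(annual_payoff, horizon_years):
    """Sum an action's annual payoff over the planning horizon."""
    return sum(annual_payoff(t) for t in range(horizon_years))

# Action A: steady short-run benefits (assumed 10 units/year).
action_a = lambda t: 10.0

# Action B: a trajectory change with negligible early payoff (0.5/year),
# paying off heavily (50/year) once it kicks in after year 80 (assumed).
action_b = lambda t: 0.5 if t < 80 else 50.0

for horizon in [10, 100, 1_000, 100_000]:
    a = total_value(action_a, horizon)
    b = total_value(action_b, horizon)
    best = "A (short-run)" if a > b else "B (long-run)"
    print(f"horizon {horizon:>7,} years: A={a:,.0f}  B={b:,.0f}  -> best: {best}")

# With these made-up numbers the two plans agree out to roughly a century
# and diverge beyond that; the 10^50-year and 10^100-year plans would be
# identical, echoing the thought experiment above.
```

Of course, real payoff schedules are deeply uncertain, which is exactly why the horizon at which plans diverge is an empirical question rather than something to settle from the armchair.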
On:
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects … the value of these actions will in fact be coming from the long-run effects
I think I agree with this (at least intuitively agree; I haven’t given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if the short-term best actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.
– –
2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
I agree that AL leads to ‘deontic strong longtermism’.
I don’t think the expected value approach (the dominant approach in their paper) fully engages with how to make complex decisions about the far future, nor do the other approaches they discuss. I don’t think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).
I would need to know more about your proposed alternative to comment.
Unfortunately, I am running out of time and weekend to go into this in much depth, so I hope you don’t mind if, instead of a lengthy answer here, I just link you to some reading.
I have recently been reading the following, which you might find an interesting introduction to how one might go about thinking about these topics, and which is fairly close to my views:
https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/
https://www.givewell.org/modeling-extreme-model-uncertainty
– –
Always happy to hear your views. Have a great week
I think I agree with this (at least intuitively agree; I haven’t given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3, p9 – the authors say that if the short-term best actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors, as it is not relevant to the truth of AL.
I just don’t really see a meaningful/important distinction between AL and CL, to be honest. Let’s consider that AL is true, and also that cultivated meat happens to be the best intervention from both a short-termist and a longtermist perspective.
A short-termist might say: I want cultivated meat so that people stop eating animals, reducing animal suffering now.
A longtermist might say: I want cultivated meat so that people stop eating animals and therefore develop moral concern for all animals. This will reduce the risk of us locking in persistent animal suffering in the future.
In this case, if AL is true, I think we should also be colloquial longtermists and justify cultivated meat in the way the longtermist does, as that would be the main reason cultivated meat is good. If evidence were to come out that stopping eating meat doesn’t improve moral concern for animals, cultivated meat may no longer be great from a longtermist point of view—and it would be important to reorient based on this fact. In other words, I think AL should push us to strive to be colloquial longtermists.
Otherwise, thanks for the reading, I will have a look at some point!
I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.
Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say).
I think I essentially agree, and I think that these sorts of points are too often ignored. But I don’t 100% agree. In particular, I wouldn’t be massively surprised if, after a few years of relevant research, we basically concluded that there’s a systematic reason why the sort of things that are good for the short-term will tend to also be good for the long-term, and that we can basically get no better answers to what will be good for the long-term than that. (This would also be consistent with Greaves and MacAskill’s suggestion of speeding up progress as a possible longtermist priority.)
I’d bet against that, but not with massive odds. (It’d be better for me to operationalise my claim more and put a number on it, rather than making these vague statements—I’m just taking the lazy option to save time.)
And then if that was true, it could make sense, most of the time, to just focus on evaluating things based on short-term effects, because those are easier to evaluate. We could have most people focusing on that proxy most of the time, while a smaller number of people continue checking whether it seems a good proxy and whether we can come up with better ones.
I think most longtermists are already doing something that’s not massively different from that: Most of us focus most of the time on reducing existential risk, or some specific type of existential risk (e.g., extinction caused by AI), as if that’s our ultimate, terminal goal. Or we might even most of the time focus on an even more “proximate” or “merely instrumental” proxy, like “improving institutions’ ability and motivation to respond effectively to [x]”, again as if that’s a terminal goal.
(I mean this to stand in contrast to consciously focusing on “improving the long-term future as much as possible”, and continually re-deriving what proxies to focus on based on that goal. That would just be less efficient.)
Then we sometimes check in on whether the proxies we focus on are actually what’s best for the future.
I think this approach makes sense, though it’s also good to remain aware of what’s a proxy and what’s an ultimate goal, and to recognise our uncertainty about how good our proxies are. (This post seems relevant, and in any case is quite good.)
Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects.
Yeah, this is also what came to mind for me when I read weeatquince’s comment. I’d add that Greaves and MacAskill also discuss some possible decision-theoretic objections, including objections to the idea that one should simply make decisions based on what seems to have the highest expected value, and argue that the case for longtermism seems robust to these objections. (I’m not saying they’re definitely right, but rather that they do seem to engage with those potential counterarguments.)