I have a post where I address what I see as misconceptions about longtermism. In response to Future people count, but less than present people I would recommend you read the “Longtermists have to think future people have the same moral value as people today” section. In short, I don’t think future people counting for less really dents longtermism much at all, as it isn’t reasonable to discount that heavily. You seem to accept that we can’t discount that much, so if you accept the other core claims of the argument, longtermism still goes through. Discounting future people somewhat is pretty irrelevant, in my opinion.
I want to read that Thorstad paper and until I do can’t really respond. I would say, however, that even if the expected number of people in the future isn’t as high as many longtermists have claimed, it still has to be at least somewhat large: large enough that GiveWell charities, which focus on near-term effects, aren’t the best we can do. One could imagine being a ‘medium-termist’ and wanting to, say, address climate change and boost economic growth, which affect the medium and long term. Moving to GiveWell would seem to me to be overcorrecting.
The assumption that future people will be happy isn’t required for longtermism (as you seem to imply). The value of reducing extinction risk does depend on future people being happy (or at least above the zero level of wellbeing), but there are longtermist approaches that don’t involve reducing extinction risk. My post touches on some of these in the “Sketch of the strong longtermist argument” section. For example: mitigating climate change, ensuring good institutions develop, and ensuring AI is aligned to benefit human wellbeing.
You say that some risks, such as those from AGI or biological weapons, are “less empirical and more based on intuitions or unverifiable claims, and hence near-impossible to argue against”. I think one can argue against these risks. For example, David Thorstad argues that various assumptions underlying the singularity hypothesis are substantially less plausible than its advocates suppose, and that this should allay fears related to existential risk from AI. You can point out weaknesses in the arguments for specific existential risks, it just takes some effort! Personally I think the risks are credible enough to take them seriously, especially given how bad the outcomes would be.
Thank you for the feedback on both the arguments and the writing (something I am aiming to improve through this writing). Sorry for being slow to respond, it’s been a busy few weeks!
In response to your points:
In short, I don’t think future people counting for less really dents longtermism much at all, as it isn’t reasonable to discount that heavily. You seem to accept that we can’t discount that much, so if you accept the other core claims of the argument, longtermism still goes through. Discounting future people somewhat is pretty irrelevant, in my opinion.
I suspect this depends strongly on the overall shape you assume for the value of the future. If you assume indefinite exponential growth, you’re correct. For shapes of future value that I consider more reasonable, this will probably start to matter. In any case, it damages the case for future people to some extent, but I agree it is not fatal.
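To make this concrete, here is a minimal sketch of what I mean. The helper and every number in it (the 1% growth rate, the 100x plateau, the horizon, the discount rates) are purely illustrative assumptions of mine, not estimates:

```python
# A minimal sketch of how a small discount rate interacts with the "shape"
# of future value. Every number here is an illustrative assumption, not an estimate.

def discounted_total(value_at_year, annual_discount_rate, horizon_years):
    """Sum of discounted annual value over the horizon."""
    return sum(
        value_at_year(t) / (1 + annual_discount_rate) ** t
        for t in range(horizon_years)
    )

HORIZON = 10_000  # assumed horizon in years

def exponential(t):
    """Value grows 1% per year indefinitely (the 'infinite exponential growth' case)."""
    return 1.01 ** t

def plateau(t):
    """Value grows 1% per year but plateaus at 100x today's level."""
    return min(1.01 ** t, 100.0)

for rate in (0.0, 0.001, 0.005):
    exp_total = discounted_total(exponential, rate, HORIZON)
    plat_total = discounted_total(plateau, rate, HORIZON)
    print(f"discount rate {rate:.1%}: exponential ~ {exp_total:.3g}, plateau ~ {plat_total:.3g}")
```

With these made-up numbers, the discounted total stays astronomically large under the exponential shape for any rate below the growth rate, whereas under the plateau shape a 0.5% annual rate already cuts the total by a couple of orders of magnitude, which is the sense in which I think the discount starts to matter.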
I would say, however, that even if the expected number of people in the future isn’t as high as many longtermists have claimed, it still has to be at least somewhat large: large enough that GiveWell charities, which focus on near-term effects, aren’t the best we can do. One could imagine being a ‘medium-termist’ and wanting to, say, address climate change and boost economic growth, which affect the medium and long term. Moving to GiveWell would seem to me to be overcorrecting.
Interesting claim. I would be very interested in a cost-effectiveness analysis (even at BOTEC level) to support this. I don’t think we can resolve this without being quantitative.
The assumption that future people will be happy isn’t required for longtermism (as you seem to imply). The value of reducing extinction risk does depend on future people being happy (or at least above the zero level of wellbeing), but there are longtermist approaches that don’t involve reducing extinction risk. My post touches on some of these in the “Sketch of the strong longtermist argument” section.
I’m pretty sceptical of the tractability of non-x-risk work and our ability to shape the future in broad terms.
You can point out weaknesses in the arguments for specific existential risks, it just takes some effort!
You can, and sometimes (albeit rarely) these arguments are productive, but I still think any numeric estimate you end up with is pretty much just based on intuitions and leans heavily on priors.
Personally I think the risks are credible enough to take them seriously, especially given how bad the outcomes would be.
Yes, we should certainly take them seriously. But “seriously” is too imprecise a guide to how many resources we should be willing to throw at the problem.
For shapes of future value that I consider more reasonable, this will probably start to matter.
Did you read the link I sent? I don’t see how it is reasonable to discount very much. I would discount people in the distant future about as much as I would discount geographically distant people (people who are alive today but are not near me). That is to say, not very much.
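To put some numbers on what “very much” would mean, here is the arithmetic on a few example rates (the rates and horizons are just illustrations, not values either of us has proposed):

```python
# What a constant annual discount rate implies for how much someone born
# N years from now counts relative to someone alive today.
# The rates below are examples only, not values either of us has proposed.
for annual_rate in (0.03, 0.01, 0.001):
    for years_out in (100, 500, 5_000):
        weight = 1 / (1 + annual_rate) ** years_out
        print(f"rate {annual_rate:.1%}, {years_out:>5} years out: relative weight = {weight:.2e}")
```

Even a 1% annual rate implies someone born 500 years from now counts for less than 1% of a person alive today, which seems about as hard to defend as discounting people by how far away they live.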
Interesting claim. I would be very interested in a cost-effectiveness analysis (even at BOTEC level) to support this. I don’t think we can resolve this without being quantitative.
That is fair, and something I think would be worthwhile. It might be something I try to do at some point. However, I would also note the problem of cluelessness, which I think is a particular issue for neartermist interventions (see here for my short description of the issue and here for a slightly longer one). In short, I don’t think we actually have a clear sense of the cost-effectiveness of neartermist interventions. I could do a BOTEC and compare it to GiveWell’s estimates, but I also think GiveWell’s estimates leave out far too many effects to be very meaningful.
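For what it’s worth, the skeleton of such a BOTEC is straightforward; the inputs are where all the uncertainty lives. A minimal sketch, in which every number is a placeholder assumption rather than anyone’s actual estimate, might look like this:

```python
# Skeleton of the kind of BOTEC I have in mind. Every number below is a
# placeholder assumption for illustration, not a real estimate
# (mine, GiveWell's, or anyone else's).

# Neartermist benchmark: assumed cost to save one life via a GiveWell-style charity.
neartermist_cost_per_life = 5_000      # USD, assumed

# Longtermist intervention: spend a budget to reduce extinction risk slightly.
budget = 1_000_000_000                 # USD, assumed
absolute_risk_reduction = 1e-6         # assumed reduction in extinction probability
expected_future_lives = 1e12           # assumed number of future lives at stake

expected_lives_saved = absolute_risk_reduction * expected_future_lives
longtermist_cost_per_life = budget / expected_lives_saved

print(f"neartermist benchmark: ~${neartermist_cost_per_life:,.0f} per life")
print(f"longtermist sketch:    ~${longtermist_cost_per_life:,.0f} per expected life")
```

The conclusion is driven almost entirely by the two assumed longtermist inputs (how much risk reduction the budget buys and how many future lives are at stake), which is exactly where the cluelessness worry bites, so I don’t pretend a sketch like this settles anything.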
I’m pretty sceptical of the tractability of non-x-risk work and our ability to shape the future in broad terms.
It feels odd to dismiss a whole class of interventions without justification. Mitigating climate change is certainly tractable, and boosting technological progress / economic growth also seems tractable. I can also think of ways to improve values.
Yes, we should certainly take them seriously. But “seriously” is too imprecise a guide to how many resources we should be willing to throw at the problem.
I do personally think that, on the margin, all resources should be going to longtermist work.