I think a caricature/extreme version of a related view would be “Progress is practically guaranteed to continue to the point where everything eventually becomes as good as it could be. Therefore, there’s no need to try to improve the long-run future, and what we should do is just make things go better until that point, or help us get to that point faster.”
I don’t know if anyone confidently holds a view quite that extreme. But I think it’s relatively common to think there’s a decent chance that something like that is true, and that’s probably one of the common reasons for people not prioritising “longtermist interventions”.
Personally, I think that believing there’s a decent chance that something like that is true probably makes sense. However, I currently believe it’s sufficiently likely that we’re at something like a hinge of history, where that march of progress could be foiled, that longtermist work makes sense. And I also believe we can reach a similar conclusion from the idea that, even if we avoid x-risks and bad lock-in, we may not be guaranteed to reach an optimal point “by default” (e.g., maybe moral circles won’t expand far enough, or we’ll get stuck in some bad equilibria), so longtermist “trajectory change” work could be valuable.
(My point here is more to try to highlight some views than to argue for or against them.)
Is the idea that most of the opportunities to do good will be soon (say in the next 100-200 years)? E.g. because we expect less poverty, fewer factory farms, etc.? Or because the AI is gonna come and make us all happy, so we should just make the bit before that good?
Distinct from that seems ‘make us get to that point faster’ (I’m imagining this could mean things like increasing growth/creating friendly AI/spreading good values) - that seems very much like looking to long-term effects.
Is the idea that most of the opportunities to do good will be soon (say in the next 100-200 years)? E.g. because we expect less poverty, fewer factory farms, etc.? Or because the AI is gonna come and make us all happy, so we should just make the bit before that good?
I think there’s a decent number of people who give a fair amount of credence to either or both of those possibilities. (I guess I count myself among such people, but I also feel wary about having high confidence in those claims, and I see it as very plausible that progress will be disrupted in various ways.) People may also believe the first thing because they believe the second thing; e.g., we’ll develop very good AI (it doesn’t necessarily have to be agenty or superintelligent), and that will allow us to either suddenly or gradually-but-quickly eliminate poverty, develop clean meat, etc.
Distinct from that seems ‘make us get to that point faster’ (I’m imagining this could mean things like increasing growth/creating friendly AI/spreading good values) - that seems very much like looking to long-term effects.
One way in which speeding things up is distinct is that it also helps us ultimately access more resources (the astronomical waste type of argument). But it mostly doesn’t seem very distinct to me from the other points. Basically, you might think we’ll ultimately reach a fairly optimal state, so speeding things up won’t change that end state, but it will change how much suffering/joy there is before we get there. This sort of idea is expressed in the graph on the left here.
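To make that point concrete, here’s a toy numerical sketch in Python (my own illustration, not anything from the thread or from the linked graph). All the specific numbers (a 200-year window, a jump from value 1 to value 10 at year 100, a 20-year speed-up) are made up purely for illustration; the point is just that a speed-up leaves the end state unchanged but adds the area between the two curves before the plateau.

```python
# Toy illustration: two value-over-time trajectories that reach the same
# "optimal" plateau, one of them sped up. The end state is identical, but
# the sped-up path accumulates more value beforehand -- the area between
# the curves before the plateau.

years = range(200)                     # arbitrary 200-year window
plateau_year, plateau_value = 100, 10.0
speedup = 20                           # hypothetical 20-year speed-up

def value(t, shift=0):
    """Value produced in year t: low before the plateau, maximal after."""
    return plateau_value if t >= plateau_year - shift else 1.0

baseline = sum(value(t) for t in years)
sped_up = sum(value(t, shift=speedup) for t in years)

print(baseline, sped_up, sped_up - baseline)
```

Under those assumptions the sped-up trajectory adds 20 * (10 - 1) = 180 units of value relative to the baseline, without changing anything about where we eventually end up.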
So I feel like maybe I’m not understanding that part of your comment?
(I should hopefully be publishing a post soon disentangling things like existential risk reduction, speed-ups, and other “trajectory change” efforts. I’ll say it better there, and give pretty pictures of my own :D)
Ah yeah, that makes sense. I think they seemed distinct to me because one seems like ‘buy some QALYs now before the singularity’ and the other seems like ‘make the singularity happen sooner’ (obviously these are big caricatures). And the second one seems like it has a lot more value than the first, if you can do it (of course I’m not saying you can). But yeah, they are the same in that they are adding value before a set time. I can imagine that post being really useful to send to people I talk to - looking forward to reading it.