Is the idea that most of the opportunities to do good will be soon (say in the next 100-200 years)? E.g. because we expect less poverty, fewer factory farms, etc.? Or because AI is going to come and make us all happy, so we should just make the bit before that good?
‘Make us get to that point faster’ seems distinct from that (I’m imagining this could mean things like increasing growth, creating friendly AI, or spreading good values), and that seems very much like looking to long-term effects.
On the first part of your comment: I think there are a decent number of people who give a decent amount of credence to either or both of those possibilities. (I guess I count myself among such people, but I also feel wary about having high confidence in those claims, and I see it as very plausible that progress will be disrupted in various ways.) People may also believe the first thing because they believe the second thing; e.g., we’ll develop very good AI (it doesn’t necessarily have to be agenty or superintelligent), and that will allow us to either suddenly or gradually-but-quickly eliminate poverty, develop clean meat, etc.
On ‘make us get to that point faster’: one way speeding things up is distinct is that it can also allow us to ultimately access more resources (the astronomical waste type of argument). But it mostly doesn’t seem very distinct to me from the other points. Basically, you might think we’ll ultimately reach a fairly optimal state, so speeding things up won’t change that endpoint, but it will change how much suffering/joy there is before we get to that state. This sort of idea is expressed in the graph on the left here.
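(To put a rough toy model on that, purely as my own illustration rather than anything from the linked graph: suppose value accrues at a low rate $v_{\text{low}}$ until a transition at time $T$, and at a high rate $v_{\text{high}}$ from then until some fixed horizon $E$. Then total value is

\[
V(T) = v_{\text{low}}\,T + v_{\text{high}}\,(E - T),
\]

so bringing the transition forward by $\Delta t$ changes total value by

\[
V(T - \Delta t) - V(T) = \Delta t\,(v_{\text{high}} - v_{\text{low}}).
\]

That is, with the endpoint fixed, a speed-up just buys $\Delta t$ of the better state earlier; it only does more than that if starting earlier also expands the resources we can ultimately access, which is the astronomical waste consideration.)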
So I feel like maybe I’m not understanding that part of your comment?
(I should hopefully be publishing a post soon disentangling things like existential risk reduction, speed-ups, and other “trajectory change” efforts. I’ll say it better there, and give pretty pictures of my own :D)
Ah yeah, that makes sense. I think they seemed distinct to me because one seems like ‘buy some QALYs now before the singularity’ and the other seems like ‘make the singularity happen sooner’ (obviously these are big caricatures). And the second one seems like it has a lot more value than the first if you can do it (of course I’m not saying you can). But yeah, they are the same in that they are adding value before a set time. I can imagine that post being really useful to send to people I talk to; looking forward to reading it.