Is the idea that most of the opportunities to do good will come soon (say, in the next 100-200 years)? E.g., because we expect less poverty, fewer factory farms, etc.? Or because the AI is gonna come and make us all happy, so we should just make the bit before that good?
I think there's a decent number of people who give a decent amount of credence to either or both of those possibilities. (I guess I count myself among such people, but I also feel wary about having high confidence in those claims, and I see it as very plausible that progress will be disrupted in various ways.) People may also believe the first thing because they believe the second thing; e.g., we'll develop very good AI (it doesn't necessarily have to be agenty or superintelligent), and that will allow us to either suddenly or gradually-but-quickly eliminate poverty, develop clean meat, etc.
Distinct from that seems "make us get to that point faster" (I'm imagining this could mean things like increasing growth/creating friendly AI/spreading good values) - that seems very much like looking to long-term effects.
One way in which speeding things up is distinct is that it also allows us to ultimately access more resources (the astronomical-waste-type argument). But it mostly doesn't seem very distinct to me from the other points. Basically, you might think we'll ultimately reach a fairly optimal state, so speeding things up won't change that, but it'll change how much suffering/joy there is before we get to that state. This sort of idea is expressed in the graph on the left here.
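To put that intuition in toy-model terms (this is just my own sketch, with made-up symbols; I'm not claiming it's what the linked graph shows): say the world generates value at some rate $v_0$ until time $T$, after which it settles into a roughly optimal rate $v^*$ with $v^* > v_0$. A pure speed-up that moves $T$ earlier by $\Delta t$ leaves the long-run trajectory untouched and adds roughly

$\Delta V \approx (v^* - v_0)\,\Delta t$

of value - i.e., the value of the speed-up is just the gap between the two rates times the amount of time you skip ahead. (The astronomical waste point would then add a further term for the extra resources you can ultimately reach by starting earlier.)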
So I feel like maybe I'm not understanding that part of your comment?
(I should hopefully be publishing a post soon disentangling things like existential risk reduction, speed-ups, and other "trajectory change" efforts. I'll say it better there, and give pretty pictures of my own :D)
Ah yeah, that makes sense. I think they seemed distinct to me because one seems like "buy some QALYs now before the singularity" and the other seems like "make the singularity happen sooner" (obviously these are big caricatures). And the second one seems like it has a lot more value than the first, if you can do it (of course I'm not saying you can). But yeah, they are the same in that they are adding value before a set time. I can imagine that post being really useful to send to people I talk to - looking forward to reading it.