I think I agree with everything you list. But I also think that extinction risk (especially from misaligned AI?) and the resulting loss of trillions of potential people is the slam-dunk case for longtermism that is usually espoused. And PAVs dramatically affect that case.
Mitigating extinction risk also seems a lot more tractable to me than doing something about s-risk. With s-risk it seems much harder to predict what would influence what over the long-ish run. But we have a pretty good sense of things that reduce near-term x-risk… preventing dangerous bio research, etc.
I’m not sure what the probability is of misaligned AI making us go extinct vs. keeping us around in a bad state, but it’s worth noting that people have written about how misaligned AI could cause s-risks.
Also, I think it’s plausible that considerations of artificial sentience dominate all others in a way that is robust to PAVs. We could create vast amounts of artificial sentience with experiences ranging from the very worst to the very best we can possibly imagine. Making sure we don’t create suffering artificial sentience / making sure we do create vast amounts of happy sentience both potentially seem overwhelmingly important to me. This will require:
Understanding consciousness far better than we currently do
Improving values and expanding the moral circle
So I’m a bit more optimistic than you are about longtermist approaches other than extinction risk reduction.