I’m not sure of the relative probability of misaligned AI driving us extinct versus keeping us around in a bad state, but it’s worth noting that people have written about how misaligned AI could cause s-risks.
Also, I think it’s plausible that considerations of artificial sentience dominate all others, in a way that is robust to person-affecting views (PAVs). We could create vast amounts of artificial sentience whose experiences range from the very worst to the very best we can possibly imagine. Both ensuring we don’t create suffering artificial sentience and ensuring we do create vast amounts of happy sentience seem potentially overwhelmingly important to me. This will require:
Understanding consciousness far better than we currently do
Improving values and expanding the moral circle
So I’m a bit more optimistic than you are about non-extinction risk reducing longtermist approaches.