Since you requested responses: I agree with something like: ‘conditional upon AI killing us all and then going on to do things that have zero moral (dis)value, it then matters little who was most responsible for that having happened’. But this seems like an odd framing to me:
1. Even if we focus solely on AI alignment, different actors bear varying levels of responsibility for worsening various risk factors, or for contributing to various safety, security, and mitigation efforts, between now and the arrival of transformative AI / ASI.
2. The post asked about AGI. Reaching AGI is not the same as reaching ASI, which is not the same as extinction.
3. It seems very possible that humanity could survive but the world could still end up severely net negative. See “The Future Might Not Be So Great”, “s-risks”, and the upcoming EA Forum debate week.
4. In particular, I believe AI alignment is not enough to ensure positive futures. See, for example, risks of stable totalitarianism, risks from malevolent actors, and risks from ideological fanaticism. We can think of this as ‘human misalignment’ or misuse of AI.
To respond to your points in order:
1. Sure, but I think of, say, a 5% probability of success and a 6% probability of success as both dire enough that I wouldn't want to pick either.
2. What we call AGI today (human-level at everything as a minimum, but running on a GPU) is what Bostrom called speed and/or collective superintelligence, if chip prices and speeds continue on their current trajectories.
3. and 4. Sure, alignment isn't enough, but it is necessary, and it seems we're not on track to clear even that low bar.