I think it rests a lot on conditional value, and that is very unsatisfactory from a simple moral perspective of wanting to personally survive and have my friends and family survive. If extinction risk is high and near (and I think it is!), we should be going all out to prevent it (i.e. pushing for a global moratorium on ASI). We can then work out the other issues once we have more time to think about them (rather than hastily punting on a long shot of surviving just because it appears higher EV now).
We can then work out the other issues once we have more time to think about them
Fin and I talk a bit about the “punting” strategy here.
I think it works often, but not in all cases.
For example, the AI capability level that poses a meaningful risk of human takeover comes earlier than the AI capability level that poses a meaningful risk of AI takeover. This is because some humans already start with loads of power, and the amount of strategic intelligence you need to take over, if you already have loads of power, is less than the strategic capability you need if you're starting off with almost none (which will be true of the ASI).
This seems like a predictive difference about AI trajectories and control, rather than an ethical debate. Does that seem correct to you (and/or to @Greg_Colbourn ⏸️?)
the AI capability level that poses a meaningful risk of human takeover comes earlier than the AI capability level that poses a meaningful risk of AI takeover.
I don’t think it comes meaningfully earlier. The gap might only be a few months (an AI capable of doing the work of a military superpower would be capable of doing most of the work involved in AI R&D, precipitating an intelligence explosion). And the humans wielding the power will lose it to the AI too, unless they halt all further development of AI (which seems unlikely, due to hubris/complacency, if nothing else).
starting off with almost none (which will be true of the ASI)
Any ASI worthy of the name would probably be able to go straight for an unstoppable nanotech computronium grey goo scenario.
Yeah, I think a lot of the overall debate (including what is most ethical to focus on!) depends on AI trajectories and control.