I would express a strong preference for the “AGI going well” framing over something like “aligned superintelligence”, as the latter presupposes a particular view of how AI is going to go that not everyone agrees with. I think the question is still worth discussing if you believe that AI progress is much more gradual or will stall out at humanish levels of intelligence. And then there’s the typical question of what “aligned” means: aligned to whom or what?
“AGI goes well” is better because it doesn’t presuppose as much: just that we have AGI and humans are doing fine.
I think the question is still worth discussing if you believe that AI progress is much more gradual or will stall out at humanish levels of intelligence.
Interesting. I was imagining that the question would have to be about some sort of locked-in superintelligence. If we are talking about AGI systems which aren’t drastically affecting the priorities that humanity has for itself, the question seems like a very obvious no (in other words: no, AGI won’t be particularly good for animals, or bad for them).
And then there’s the typical question of what “aligned” means: aligned to whom or what?
You’re right; it’d be frustrating if we just ended up having this debate for a week. That’s what tempts me toward “AGI which doesn’t cause human extinction or disempowerment” (though those terms are ambiguous too, of course).