I think the question is still worth discussing if you believe that AI progress is much more gradual or will stall out at humanish levels of intelligence.
Interesting. I was imagining that the question would have to be about some sort of locked-in superintelligence. If we're talking about AGI systems which aren't drastically affecting the priorities that humanity has for itself, the answer seems like a very obvious no (in other words: no, AGI won't be good for animals, or bad for them).
And then there's the typical question of what "aligned" means: aligned to whom, or to what?
You're right; it'd be frustrating if we just ended up having this debate for a week. That's what tempts me about "AGI which doesn't cause human extinction or disempowerment" (though those terms are ambiguous too, of course).