In the AI area, which is more important: AI sentience or AI safety for humans?
Downside-focused views typically emphasize the moral importance of reducing risks of astronomical suffering over AI safety for humans. It’s argued that future AIs are likely to have greater total moral weight than future biological beings, due to their potential to pack more sentience per unit of volume and their relative ease of replication.
So which subjects should we work on or study to reduce s-risks?
CLR argues that influencing the trajectory of AI in directions which reduce the risks of astronomically bad outcomes is an especially important priority. I personally worry that we put too much emphasis on aligning AI to “modern human values” and not to more future-proof ethics.
If we can use neuroscience to understand the essence of “consciousness”, that understanding could inform work on AI and animal sentience, as well as AI moral alignment.
I think you’d find Alex Turner and Quintin Pope’s post on this topic very helpful.