Thanks for your response. I agree that s-risks from AI are very important, but:
1. Is there a comparison showing that it's more important than other s-risk areas (e.g. macrostrategy, politics)? Within AI, is AI sentience or AI safety for humans more important?
2. What subjects should we work on or learn for s-risks? I've thought of working on cognitive science, because a lot of moral uncertainty remains unresolved (e.g. the ratio of suffering to happiness, the power of the hedonic treadmill). Neuroscience is developing quickly, too. If we can use neuroscience to understand the essence of "consciousness", it could inform AI/animal sentience and AI moral alignment. But there seem to be fewer discussions about this?
In AI area, is it AI sentience or AI safety for humans more important?
Downside-focused views typically emphasize the moral importance of reducing risks of astronomical suffering over AI safety for humans. It's argued that future AIs are likely to have greater total moral weight than future biological beings, due to their potential to pack more sentience per unit of volume and their relative ease of replication.
Thus, what subjects should we work/learn in for s-risks?
CLR argues that influencing the trajectory of AI in directions that reduce the risks of astronomically bad outcomes is an especially important priority. I personally worry that we put too much emphasis on aligning AI to "modern human values" and not enough on more future-proof ethics.
If we can use neuroscience to understand the essence of "consciousness", it could inform AI/animal sentience and AI moral alignment.
I think you’d find Alex Turner and Quintin Pope’s post on this topic very helpful.