Thanks for your response. I agree that AI-related s-risks are very important, but: 1. Is there a comparison showing they matter more than other s-risk areas (e.g. macrostrategy, politics, ...)? Within AI, which is more important: AI sentience or AI safety for humans?
So, what subjects should we work on or study for s-risks?
I've thought about working on cognitive science, because a lot of moral uncertainty remains unresolved (e.g. the ratio of suffering to happiness, the power of the hedonic treadmill, ...).
Neuroscience is developing quickly, too. If we can use neuroscience to understand the essence of "consciousness", that could inform AI and animal sentience research as well as AI moral alignment. But there seems to be less discussion of this?
Within AI, which is more important: AI sentience or AI safety for humans?
Downside-focused views typically emphasize reducing risks of astronomical suffering over AI safety for humans. The argument is that future AIs are likely to carry greater total moral weight than future biological beings, because they could pack more sentience per unit of volume and are comparatively easy to replicate.
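To make the structure of that comparison concrete, here is a toy back-of-the-envelope sketch. Every number in it (population sizes, per-mind moral weights) is a purely illustrative placeholder I've made up, not an estimate from CLR or anyone else; the point is only how "more minds x more sentience per mind" compounds.

```python
# Toy illustration of the "total moral weight" comparison above.
# All numbers are arbitrary placeholders chosen only to show the
# shape of the argument, not serious estimates.

def total_moral_weight(population: float, weight_per_mind: float) -> float:
    """Total moral weight = number of minds * moral weight per mind."""
    return population * weight_per_mind

# Hypothetical future biological population: 10 billion humans,
# each assigned a baseline moral weight of 1.
biological = total_moral_weight(population=1e10, weight_per_mind=1.0)

# Hypothetical digital minds: cheap replication could allow vastly more
# instances, and denser substrates might support more sentience per mind.
digital = total_moral_weight(population=1e14, weight_per_mind=2.0)

print(f"biological total: {biological:.2e}")
print(f"digital total:    {digital:.2e}")
print(f"ratio (digital / biological): {digital / biological:.0f}x")
```

Under these made-up inputs the digital side dominates by several orders of magnitude, which is why downside-focused views treat the welfare of digital minds as a major consideration; with different inputs the comparison could of course come out differently.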
So, what subjects should we work on or study for s-risks?
CLR argues that influencing the trajectory of AI in directions that reduce the risk of astronomically bad outcomes is an especially important priority. I personally worry that we put too much emphasis on aligning AI to "modern human values" and not enough on more future-proof ethics.
If we can use neuroscience to understand the essence of "consciousness", that could inform AI and animal sentience research as well as AI moral alignment.
I think you’d find Alex Turner and Quintin Pope’s post on this topic very helpful.
Lukas Gloor at the Center on Long-Term Risk (CLR) wrote a forum post on cause prioritization for s-risks which you might find informative.
CLR argues that suffering-focused EAs should prioritize influencing AI. Their priority areas within that field include:
Multi-agent systems
AI governance
Decision theory and formal epistemology
Risks from malevolent actors
Cause prioritization and macrostrategy related to s-risks