AI safety’s focus would probably shift significantly, too, and some current safety work may already be of questionable value on person-affecting views. I’m not an expert here, though.
I’ve heard the claim that optimal approaches to AI safety may depend on one’s ethical views, but I’ve never really seen a clear explanation of how or why. I’d like to see a write-up of this.
Granted, I’m not as well read on AI safety as many, but I’ve always gotten the impression that the AI safety problem really is “how can we make sure AI is aligned to human interests?”, which seems pretty robust to any ethical view. The only argument against this that I can think of is that human interests themselves could be flawed. If humans don’t care about, say, animals or artificial sentience, then it wouldn’t be good enough to have AI aligned to human interests—we would also need to expand humanity’s moral circle, or ensure that those who create AGI have an expanded moral circle.
I would recommend CLR’s and CRS’s write-ups for a sense of what more s-risk-focused work looks like:
https://longtermrisk.org/research-agenda
https://www.alignmentforum.org/posts/EzoCZjTdWTMgacKGS/clr-s-recent-work-on-multi-agent-systems
https://centerforreducingsuffering.org/open-research-questions/ (especially the section Agential s-risks)