TL;DR: As far as you know, you’re the only person in the world directly working on how to build AI that’s capable of making moral progress, i.e., thinking critically about goals as humans do.
(I find this pretty surprising and worrying, so I wanted to highlight it.)
Maybe “only person in the world” is a bit excessive :)
As far as I know, no one else in AI safety is directly working on it. There is some research in the field of machine ethics, on Artificial Moral Agents, with a similar motivation or objective. My guess is that, overall, very few people are working on this.
I dunno, I still think my summary works. (To be clear, I wasn’t trying to be like, “You must be exaggerating, tsk tsk”; I think you’re being honest, and for me it’s the most important part of your post, so I wanted to draw attention to it.)
Thank you!