Some thoughts:
Not really knowledgeable, but wasn't the project of coding values into AI attempted in some way by machine ethicists? That could serve as a starting point for guessing how much time it would take to specify human values.
I find it interesting that you are alarmed by current non-AI agents/optimization processes. If you take Drexler's CAIS seriously, that sort of analysis might become even more important.
I think that Friendship is Optimal’s depiction of a Utopia is relevant here.
Not much of a spoiler, but beware: it seems incredible that a future civilization could live lives practically very similar to ours (autonomy, the possibility of doing something important, community, food... 😇) but just better in almost every respect. There is some weird stuff there, some of it horrible, so I'm not that certain about that.
Regarding the intuition about ML learning faces, I am not sure it is a great analogy, because the module that tries to understand human morality might get totally misinterpreted by the other modules. Reward hacking, overfitting, and adversarial examples come to mind as ways this can go wrong. My intuition here is that any maximizer would find "bugs" in its model of human morality to exploit (because it is complex and fragile).
That may be a fundamental problem (a toy sketch of what I mean is at the end of this comment).
It seems like your intuition is mostly based on the possibility of self-correction, and I feel like that is indeed where a major crux for this question lies.
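To make the reward-hacking worry above a bit more concrete, here is a minimal toy sketch of my own in Python. Everything in it is an assumption for illustration: the 1-D action space, the made-up `true_value` function, and the hand-crafted error model standing in for an imperfect learned model of human values; none of it comes from the post.

```python
# Toy illustration (hypothetical setup): a maximizer that scores candidate
# actions with an imperfect learned value model tends to pick exactly the
# actions where the model's error happens to be largest in its favor.

import numpy as np

rng = np.random.default_rng(0)

def true_value(x):
    # Stand-in for "what humans actually want" on a toy 1-D action space
    # (purely hypothetical; any peaked function would do).
    return np.exp(-x**2)

n_candidates = 100_000
actions = rng.uniform(-5.0, 5.0, size=n_candidates)

# The "learned" value model: roughly right near the familiar region around 0,
# but increasingly unreliable for unusual actions it has to extrapolate to.
model_error = rng.normal(scale=0.02 + 0.2 * np.abs(actions))
modeled_value = true_value(actions) + model_error

# A maximizer that trusts the model picks whatever the model scores highest.
pick = np.argmax(modeled_value)

print(f"model's score of its favorite action:    {modeled_value[pick]:.2f}")
print(f"true value of that action:               {true_value(actions[pick]):.2f}")
print(f"true value of the best available action: {true_value(actions).max():.2f}")
```

With this kind of setup, the action the maximizer picks usually gets a model score far above anything actually achievable, while its true value is near zero: the optimizer effectively seeks out the places where the model is most wrong, which is the sense in which I'd expect a maximizer to exploit the "bugs" in a complex, fragile model of human morality.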