So, if I understand correctly, the central claim is this: if naturalism is true and we make a “Scientist AI” whose initial goal is to gain knowledge and which can change its goals, then the AI will be aligned. Is that accurate?
I think this is dangerously wrong. Even if the AI comes to gain perfect knowledge of morality for humans (either because naturalism is true, or because it reads about it in human-written books), there is no guarantee that it will then try to act morally. Why does the orthogonality thesis not apply? Why would the AI not disregard morality and act in its own self-interest, as many humans actually do?
(EDIT: from further reading, it seems that some forms of moral realism do reject the orthogonality thesis. To this I say: what about psychopaths?)
It is extremely implausible that an AI that can discover moral facts will be aligned by default, given the existence of so many humans who simply are not. And that is still assuming that moral realism (which I take to be similar to naturalism) is true.
What you wrote about the central claim is more or less correct: I actually made only an existential claim about a single aligned agent, because the description I gave is sketchy and far from a precise algorithmic-level description. This single agent probably belongs to a class of other aligned agents, but it seems difficult to guess how large this class is.
That is also why I have not given a guarantee that all agents of a certain kind will be aligned.
Regarding the orthogonality thesis, you might find section 1.2 of Bostrom’s 2012 paper interesting. He writes that objective and intrinsically motivating moral facts need not undermine the orthogonality thesis, since he uses the term “intelligence” in the sense of “instrumental rationality”. I would add that there is also no guarantee that the orthogonality thesis itself is correct :)
About psychopaths and metaethics, I haven’t spent a lot of time on that area of research. Like other empirical evidence, it doesn’t seem easy to interpret.