more or less any level of intelligence could in principle be combined with more or less any final goal.
Three things:
(1) This seems to me kind of a weird statement of the thesis, “could in principle” being too weak.
If I understand, you’re not actually denying that just about any combination of intelligence and values could in principle occur. As you said, we can take a fact like the truth of evolution and imagine an extremely smart being that’s wrong about that specific thing. There’s no obvious impossibility there. It seems like the same would go for basically any fact or set of facts, normative or not.
I take it the real issue is one of probability, not possibility. Is an extremely smart being likely to accept what seem like glaringly obvious moral truths (like “you shouldn’t turn everyone into paperclips”) in virtue of being so smart?
(2) I was surprised to see you say your case depended completely on moral realism. Of course, if you’re a realist, it makes some sense to approach things that way. Use your background knowledge, right?
But I think even an anti-realist may still be able to answer yes to the question above, depending on how the being in question is constructed. For example, I think something in this anti-orthogonality vein is true of humans. They tend to be constructed so that understanding of certain non-normative facts puts pressure on certain values or normative views: If you improve a human’s ability to imaginatively simulate the experience of living in slavery (a non-moral intellectual achievement), they will be less likely to support slavery, and so on.
This is one direction I kind of expected you to go at some point after I saw the Aaronson quote mention “the practical version” of the thesis. That phrase has a flavor of, “Even if the thesis is mostly true because there are no moral facts to discover, it might still be false enough to save humanity.”
(3) But perhaps the more obvious the truths, the less intelligence matters. The claim about slavery is clearer to me than the claim that learning more about turning everyone into paperclips would make a person less likely to do so. It seems hard to imagine a person so ignorant as to not already appreciate all the morally relevant facts about turning people into paperclips. It’s as if, when the moral questions get so basic, intelligence isn’t going to make a difference. You’ve either got the values or you don’t. (But I’m a committed anti-realist, and I’m not sure how much that’s coloring these last comments.)
I think (1) is right.
(2) I agree that it would depend on how the being is constructed. My claim is that it’s plausible that such a being would be moral by default, just by virtue of being smart.
(3) I think there is a sense in which I, and most modern people, have grasped the badness of slavery in a way that most people historically did not.