I agree that it’s dangerous to generalize from fictional evidence, BUT I think it’s important not to fall into the opposite extreme, which I will now explain...
Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare “This is what morality is!” and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).
The danger, which I think is the opposite danger to the one you identified, is that people “bite the bullet” and say “I’m sticking with my principles. I guess what seems abhorrent isn’t abhorrent after all; I guess what seems good isn’t good after all.”
In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak, even if this makes our total set of principles more complicated.
In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it’s almost certainly not simple enough to fit on a t-shirt.
A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don’t hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.
[Edit: Oh, and if you look at what the OP was doing with the Giver example, it wasn’t generalizing based on fictional evidence, it was rejecting a generalization.]
I disagree that biting the bullet is “almost always a mistake”. In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.
Scientists have shown, for instance, that many people’s intuitive rejection of entering the Experience Machine is due to the status quo bias: if people’s current lives were being lived inside an Experience Machine, 50% of people would want to stay in the Machine even if they could instead live the lifestyle of a multi-millionaire in Monaco. Similarly, many people’s intuitive rejection of the Repugnant Conclusion could be due to scope insensitivity.
And revising our principles to accommodate the new evidence may introduce inconsistencies into them. Also, if you’re a moral realist who believes your principles are true, it almost never makes sense to change them.
I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they’ve usually thought about the situation more carefully than people who just go with their intuition. I’m not saying people should just go with their intuition.
I’m saying that we don’t have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn’t lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won’t work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts—this bias is not irrational.)
Makes sense. Ethics—like spirituality—seems far too complicated to have a simple set of rules.