Consider The Giver. Consider a world where everyone is high on opiates all the time. There is no suffering and no beauty. Would you disturb it?
I think generalizing from these examples (and especially from fictional examples in general) is dangerous for a few reasons.
Fiction is not designed to be maximally truth-revealing. It functions as art and entertainment: it exists to move the audience, persuade them, woo them, and so on. Doing this can and often does involve revealing important truths, but it doesn’t necessarily. Sometimes fiction is effective because it affirms cultural beliefs and mores especially well (which makes it seem very true and noble). But that means it’s often (though certainly not always) a reflection of its time; it’s easy, for example, to see how fiction from the past affirmed now-outdated beliefs about gender and race. So messages in fiction are not always true.
Fiction also has many qualities that bias the audience in ways that have nothing to do with truth. For example, it’s often beautiful, high-status, and designed to play on emotions. That means that, relative to a comparable non-fictional but true account, it may seem more convincing even when the reasoning is equally or less sound. So messages in fiction are especially powerful.
For example, I think The Giver reflects the predominant (but implicit) belief of our time and culture: that intense happiness is necessarily linked to suffering, and that attempts to build utopias generally fail in obvious ways by arbitrarily excluding our most important values. IIRC, the people in The Giver can’t love. Love is one of our society’s highest values; not loving is a clear sign they’ve gone wrong. But the story never explains why love had to be eliminated to create peace; it simply establishes a connection in the reader’s mind without providing any real evidence.
Consider further that even if extreme bad weren’t a necessary cost of extreme good, we would probably still not have much fiction reflecting that truth, simply because fiction in which everything goes exceedingly well for extended periods would likely be very boring for the reader (however wonderful for the characters). People would not read that fiction. Perhaps if you made them read it, they would project their own boredom onto the story and say the story is bad because it bored them. That’s a fine policy for picking your entertainment, but a dangerous habit to establish if you’re going to be deciding real-world policy on others’ behalf.
I agree that it’s dangerous to generalize from fictional evidence, BUT I think it’s important not to fall into the opposite extreme, which I will now explain...
Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture and explain all of our intuitive judgments about morality. They triumphantly declare “This is what morality is!” and go on to promote it. Then they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science fiction (or perhaps ordinary fiction).
The danger, which I think is the opposite danger to the one you identified, is that people “bite the bullet” and say “I’m sticking with my principles. I guess what seems abhorrent isn’t abhorrent after all; I guess what seems good isn’t good after all.”
In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak, even if this makes our total set of principles more complicated.
In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it’s almost certainly not simple enough to fit on a t-shirt.
A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don’t hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.
[Edit: Oh, and if you look at what the OP was doing with The Giver example, it wasn’t generalizing based on fictional evidence; it was rejecting a generalization.]
I disagree that biting the bullet is “almost always a mistake”. In my view, it often occurs after people have reflected on their moral intuitions more closely than they otherwise would have. Our moral intuitions can be flawed. Cognitive biases can get in the way of thinking clearly about an issue.
Scientists have shown, for instance, that for many people, the intuitive rejection of entering the Experience Machine is due to status quo bias. When people are asked to imagine that their current lives are already being lived inside an Experience Machine, about 50% say they would want to stay in the Machine even if they could instead live the lifestyle of a multi-millionaire in Monaco. Similarly, many people’s intuitive rejection of the Repugnant Conclusion could be due to scope insensitivity.
Also, revising our principles to accommodate new evidence may introduce inconsistencies into our principles. And if you’re a moral realist, it almost never makes sense to change principles that you believe to be true.
I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they’ve usually thought about the situation more carefully than people who just go with their intuition. I’m not saying people should just go with their intuition.
I’m saying that we don’t have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, one that is free from bias and yet doesn’t lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. That bias works in physics, but we have reason to believe it won’t work in ethics. (Caveat: for hardcore moral realists, not just naturalists but the kind of people who think there are extra, ontologically special moral facts, this bias is not irrational.)
Makes sense. Ethics, like spirituality, seems far too complicated to have a simple set of rules.
You see the same pattern in A Clockwork Orange. Why should curing Alex of being a sadistic murderer necessitate destroying his love of music? (Music is another of our highest values, so destroying it is a lazy way to signal that something is very bad.) There is no actual reason that makes sense in the story or in the real world; it was just an arbitrary choice by the author to avoid the hard work of actually demonstrating a connection between the two things.
Now people can say “but look at A Clockwork Orange!” as if that provided evidence of anything, except that people will tolerate a hell of a lot of silliness when it’s in line with their preexisting beliefs and ethics.
I had fun talking with you, so I googled your username. :O
Thank you for all the inspirational work you do for EA! You’re a real-life superhero! I feel like a little kid meeting Batman. I can’t believe you took the time to talk to me!
That’s deeply kind of you to say, and the most uplifting thing I’ve heard in a while. Thank you very much.
Touché. I concede, but I just want to reiterate that fiction “can and often does involve revealing important truths,” so that I am not haunted by the ghost of Joseph Campbell.