tl;dr: We shouldn’t expect normative perfection of people in general, or pretend to it ourselves. It’s OK to be suboptimal: willingness to recognize as much is the first step to (sometimes) doing even better. (But still, be OK with the fact that you’ll never be perfect.)
[Probably old news for most of this audience, but it’s sometimes helpful to reiterate this sort of thing.]
Intro
A curious feature of human nature is that we’re very psychologically invested in seeing ourselves as good. Teaching applied ethics, it’s striking how resistant students often are to any hint of moral self-critique. Meat-eaters will come up with the most transparently absurd rationalizations for disregarding all the torture that goes into producing their favorite meals. Some philosophers deny that having kids is good, simply because they’re scared of the (non-)implication that people ought to have more kids.[1] And there’s obviously plenty of motivated reasoning underlying dismissals of effective altruism in the public sphere. I wish we were all more OK with just admitting moral imperfection and openly admiring others who do more good than we do, in various respects.
The role of social norms
Part of the issue seems to be that people feel strong social pressure not to admit to any divergence between what they (even ideally?) ought to do and what they actually do.[2] Anti-hypocrisy norms seem especially damaging here: there’s a sense that it’s better to have low standards than to have high standards that you struggle to always meet, even if your actual behavior is better (in absolute terms) in the latter case. In teaching, I try to push back against this by modelling tolerance of moral mistakes: “I still eat meat sometimes, even though I don’t really think it’s justifiable.” My hope is that this helps to create a learning environment where students can be more honest with themselves—a sense of, “Oh, good, we don’t have to pretend anymore.” Otherwise, there can be an atmosphere of defensiveness when discussing such topics, as people wonder whether they are going to be subject to attack for their personal decisions. That obviously isn’t conducive to open-minded inquiry.
So I think it can be valuable to create a kind of “safe space” for moral mediocrity, and that this can even be the first step in encouraging people to appreciate that they could do better (and might even feel better about themselves if they did). In general, I think it’s hard for moral motivations to win out over conformity and immediate gratification, so the most reliable way to do better is probably to develop a community of people with shared high standards. (That’s something I find very valuable about the EA community, for example.) It’s often easier to advocate that “we all” should do something valuable (pay higher taxes, eat vegan, tithe 10% to effective charities) than to do it unilaterally, when no-one else around you is doing the same.
As a result, I think it makes sense to be pretty tolerant of people in different social circumstances who are just conforming to their local norms. But I’m inclined to take a stricter stance when it comes to intellectual demands: everyone should acknowledge moral truths, even when they struggle to live up to them. Even though I eat meat, I can certainly acknowledge that veganism is better, and celebrate when a community successfully shifts its norms to make going vegan easier. And I think this is basically the stance that people who don’t donate to effective charities should have towards effective altruism, for example. It’s fine (not great, but fine) if you prefer to spend your money on yourself. We all do, to some degree. But that’s no excuse for opposing effective philanthropy. Just be honest.
Philosophical Cover
A lot of anti-beneficentric normative theorizing strikes me as the worst kind of cope. It’s, like, systematized motivated reasoning, aimed at securing the result that your everyday actions are completely normatively optimal. It’s absurd, and I don’t understand why anyone takes it seriously.
Consider, for example, the demandingness objection to consequentialism. Many philosophers are like, “Of course we couldn’t really have decisive reason to prioritize saving children’s lives over taking intercontinental vacations every summer. What a claim!”
Many treat it as a pre-theoretic datum that most everyday acts are fully justified, in the sense that people are always doing what they have most all-things-considered reason to do. When we fail to do what’s morally optimal, that’s just because we have sufficient non-moral reason to give thousands or even millions of times more weight to our own interests. Or something like that.[3]
I don’t understand why anyone would take that as a datum. It doesn’t seem true to me, even just reflecting on my own decisions. I am constantly making suboptimal decisions, by any reasonable standard (moral, prudential, whatever). Force of habit is extremely strong. ‘Ugh fields’ and anxiety can prevent me from taking the first steps or even thinking about a task that would be well worth completing. There are things I intellectually recognize the value of, but just don’t emotionally care about, and it’s very hard to be motivated by such purely abstract values. But most of all, there’s just a very small space of possibilities that I regard as “live options”—things I will actually seriously consider doing—even though I don’t for a moment believe that this cozy, familiar space contains all the genuinely worthwhile options. Willpower and executive functioning are scarce cognitive resources; it’s entirely inevitable that we run much of our lives on auto-pilot, and there’s no general reason to expect optimal calibration here.
Maybe I’m unusually irrational,[4] but I sure don’t get the impression that other people are superlatively reasonable and wise. Many people are terrible with money.[5] More generally, it seems like everyone struggles with various forms of weakness of will and (often) moral confusion, prioritizing immediate gratification over greater future goods (hyperbolic discounting), prioritizing smaller salient values over larger less-salient ones, and so on. Given these familiar facts about human psychology, it just seems entirely to be expected that we will routinely fail to do what we have most reason to do.
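(An aside on that last parenthetical, for readers who haven’t met the term: hyperbolic discounting has a simple textbook model, sketched below. The functional form is the standard one; the dollar amounts and the impatience parameter k are made up purely for illustration.)

```latex
% Hyperbolic discounting values a reward A received after delay D at
%   V(A, D) = A / (1 + kD),   where k is an impatience parameter.
\[ V(A, D) = \frac{A}{1 + kD} \]
% Illustrative numbers (k = 0.2 per day). Viewed a month in advance,
% the larger-later reward looks better:
\[ V(110, 31) = \tfrac{110}{7.2} \approx 15.3 \;>\; V(100, 30) = \tfrac{100}{7.0} \approx 14.3 \]
% But once day 30 arrives, the smaller-sooner reward wins:
\[ V(100, 0) = 100 \;>\; V(110, 1) = \tfrac{110}{1.2} \approx 91.7 \]
% Exponential discounting (V = A·δ^D) compares 1.1·δ to 1 from every vantage
% point, so the ranking never flips. The flip is the signature of hyperbolic
% discounting: a predictable, systematic failure to do what one has most
% reason to do.
```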
Our ethical theories should reflect this expectation. Indeed, one of the main purposes of a moral theory is to help us (in conjunction with relevant social science) to identify likely sources of normative error, or ways we are apt to go wrong. The forces we learn about from evolutionary and social psychology are obviously not perfectly aligned with any reasonable account of what we really have most reason to care about. So anyone who expects normative error to be rare simply cannot be thinking clearly. We should expect high moral returns from taking back the wheel and carefully surveying the terrain for neglected opportunities. (But we should not expect that any such effort will be sufficient to avoid all normative mistakes. Nor should we be excessively bothered by this basic fact of life.)
Living with Imperfection
As I’ve written previously, we should just be honest about the fact that our choices aren’t always perfectly justified. That’s not ideal, but nor is it the end of the world. It’s OK to be flawed—everyone else is too. We can all celebrate incremental improvements, and uphold norms to prevent moral backsliding (to severely below-average behavior).
The alternative is to indulge in the collective fantasy that everything we do is already ideal—that our everyday acts of selfishness and short-sightedness are actually what we have most reason to do—because our immediate narrow interests are allegedly just so much more “worth” acting upon.
If you don’t think about it too much, maybe you can get away with the fantasy. But I don’t think it survives scrutiny. And I think there’s a lot to be said for intellectual honesty, facing hard truths, and muddling through as best we can (given our myriad cognitive and motivational limitations).
As Scott Alexander wrote in ‘Nobody is Perfect, Everything is Commensurable’:
Nobody is perfect. This gives us license not to be perfect either. Instead of aiming for an impossible goal, falling short, and not doing anything at all, we set an arbitrary but achievable goal designed to encourage the most people to do as much as possible. That goal is ten percent.
Everything is commensurable. This gives us license to determine exactly how we fulfill that ten percent goal. Some people are triggered and terrified by politics. Other people are too sick to volunteer. Still others are poor and cannot give very much money. But money is a constant reminder that everything goes into the same pot, and that you can fulfill obligations in multiple equivalent ways. Some people will not be able to give ten percent of their income without excessive misery, but I bet thinking about their contribution in terms of a fungible good will help them decide how much volunteering or activism they need to reach the equivalent.
Avoiding Villainy
One non-arbitrary threshold is the distinction between having your life contribute positively vs negatively to the world as a whole. I think it makes sense to be especially concerned to ensure that one’s existence turns out to be a good thing on the whole. I also think this is very easy to achieve. Yet there’s a significant risk that most people are currently on track to fail here, simply due to how extraordinarily bad factory-farming is (and how each additional meat-eater contributes to increasing demand for factory farming).
As long as you’re not a criminal, your everyday actions are probably net-positive for humanity. (If you’re worried about environmental impact, consider offsetting with a donation to an effective climate organization like Clean Air Task Force.) Most jobs create value for others; your personal interactions are hopefully overall to the good; and if you have kids, you’re doing the essential work of keeping civilization going. Some people with immense (political, cultural, or economic) power abuse it badly in ways that make me regret their existence, but I doubt any of them are reading this blog. So I’m going to go out on a limb and say I’m glad that you exist, dear reader.
But there is a real risk that you cause a lot of harm to non-human animals. I’m not sure of the precise details, but a typical American diet could be so bad for animals that it outweighs all the good in your life, and the good that you do for others. It’s a scary thought!
There are two obvious ways to avoid this risk of outright villainy:
(1) Go vegan, or
(2) Donate sufficiently to effective animal charities to offset the harm done by your diet.[6] (I’m guessing a couple hundred dollars a year would likely do the trick?)
Once you do that, and can reasonably expect your life impact to be in the green, I think you should feel good about your existence. But you needn’t stop there. I like The 🔸10% Pledge because taking it almost guarantees that your life impact will be incredibly awesome, which is even better than “not villainous”! And it’s not even hard! But it’s your life—*shrug*—use your own judgment.
[1] Compare: ought you to donate a kidney? Only in the sense that it would be a great thing to do. Not that it should be regarded as morally obligatory.
[2] Sometimes this can result in quite implausible claims about what one would do in high-stakes circumstances: “Oh, I would absolutely throw myself in front of the trolley to save five, if I were the one on the footbridge.” Really? Remind me how many healthy kidneys you have right now?
[3] Compare my previous discussion of “rationalist” conceptions of permissibility.
[4] Probably true in some specific respects (e.g. social anxiety).
[5] I’m really baffled by how many people hate their jobs and yet spend money very wastefully. Why not FIRE?
[6] Or take other actions that are even better in expectation.
I checked this post because I think I’m among those who could benefit from reading about why ‘imperfection is okay’, even though I’ve seen the saying before.
After reading your post fully, it doesn’t seem like I’m the intended audience. (oh well. maybe this will be of interest to you or other EA forum readers anyways.)
I don’t feel bad about my existence, to be clear. I also don’t feel good about it, even though I’m more ‘in the green’ in expectation than nearly all humans via longtermist (AI alignment) research.
As Eliezer writes, in a passage I continually quote:
If I only have an impact that’s better-than-that-of-nearly-any-other-human, the quantity of tragedy in the world will remain around the same huge number.
This is not a situation where “my decision is correlated with that of enough other agents that I/we just need to be better-than-had-I/we-not-existed to have a scale-shifting collective impact”.
A metaphor I like is the situation of Eliezer’s Hirou after being Isekai’d into the world of The Sword of Good, where the non-human sentient orc species is being abused and enslaved. Sure, Hirou could just not participate in that, make a few effective donations, and otherwise be inactive in that world, and they’d be net-positive compared to if they didn’t exist. But since they also uniquely have the ability to complete the ritual that would end all the tragedy in that world, shouldn’t they do that instead? Isn’t aiming for anything less ‘not good enough’, given their unique situation?
This is ~how I and maybe some other alignment researchers feel about our actual situations. It’s also a much more demanding standard, because alignment is hard: we can’t just magically complete the ritual like Hirou does. It’s a standard I’m probably not yet meeting, but I think it’s the correct one for me to try to follow, because anything less fails to save the world.
(I think the standards of “Don’t be net-negative,” or perhaps “donate some excess income such that you’re left with good living standards but still do an extraordinary quantity of good,” are good ones for most humans, to be clear.)
(Also, a disclaimer for some who might need it: “what level of impact do I consider ‘good enough’ for myself” has to be subjectively chosen. Our values give the full picture of which actions we’d consider more or less good, and within that picture there is no technical ‘good enough’ threshold. For many, judging themselves to be not-good-enough could reduce their effectiveness, and so they should choose a less demanding standard, since it is after all a choice.)
More wisdom from Eliezer (from a quote I found via Nevin’s comment):
I first learned this lesson in my youth when, after climbing to the top of a leaderboard in a puzzle game I’d invested >2k hours into, I was surpassed so hard by my nemesis that I had to reflect on what I was doing. Thing is, they didn’t just surpass me and everybody else, but instead continued to break their own records several times over.
Slightly embarrassed by having congratulated myself for my merely-best performance, I had to ask “how does one become like that?”
My problem was that I’d always just been trying to get better than the people around me, whereas their target was the inanimate structure of the problem itself. When I had broken a record, I said “finally!” and considered myself complete. But when they did the same, they said “cool!”, and then kept going. The only way to defeat them would be by not trying to defeat them, and instead focusing on fighting the perceived limits of the game itself.
To some extent, I am what I am today, because I at one point aspired to be better than Aisi.
To be clear, I’m all in favor of aiming higher! Just suggesting that you needn’t feel bad about yourself if/when you fall short of those more ambitious goals (in part, for the epistemic benefits of being more willing to admit when this is so).
Executive summary: We should be more accepting of moral imperfection in ourselves and others, while still striving to do better and acknowledging moral truths even when we struggle to live up to them.
Key points:
- People often resist moral self-critique and rationalize unethical behavior due to psychological investment in seeing themselves as good.
- Creating a “safe space” for moral mediocrity can encourage honesty and openness to improvement.
- Philosophical theories that justify everyday selfish actions as optimal are misguided; we should expect frequent normative errors.
- It’s important to ensure one’s life has a net positive impact, which can be achieved through veganism or offsetting harm with donations.
- The author recommends The 10% Pledge as a way to have an incredibly positive life impact.
- We should face hard truths about our moral shortcomings while still celebrating incremental improvements.
This comment was auto-generated by the EA Forum Team.