In Defense of Aiming for the Minimum
I’m not really sympathetic to the following common sentiment: “EAs should not try to do as much good as feasible at the expense of their own well-being / the good of their close associates.”
It’s tautologically true that if trying to hyper-optimize comes at too much of a cost to the energy you can devote to your most important altruistic work, then trying to hyper-optimize is altruistically counterproductive. I acknowledge that this is the principle behind the sentiment above, and evidently some people’s effectiveness has benefited from advice like this.
But in practice, I see EAs apply this principle in ways that seem suspiciously favorable to their own well-being, or to the status quo. When you find yourself justifying, on the grounds of impact, the same amount of self-care that people who don’t care about being effectively altruistic afford themselves, you should be extremely suspicious.
Here are some examples. I cite these not to pick on the authors in particular, since I think many others are making a similar mistake, but because they actually wrote the claims down.
1. “Aiming for the minimum of self-care is dangerous”

“I felt a bit suspicious, looking at how I spent my time. Surely that long road trip wasn’t necessary to avoid misery? Did I really need to spend several weekends in a row building a ridiculous LED laser maze, when my other side project was talking to young synthetic biologists about ethics?”
I think this is just correct. If your argument is that EAs shouldn’t be totally self-effacing because some frivolities are psychologically necessary to keep rescuing people from the bottomless pit of suffering, then sure, do the things that are psychologically necessary. I’m skeptical that “psychologically necessary” actually looks similar to the amount of frivolities indulged by the average person who is as well-off as EAs generally are.
Do I live up to this standard? Hardly. That doesn’t mean I should pretend I’m doing the right thing.
“Minimization is greedy. You don’t get to celebrate that you’ve gained an hour a day [from sleeping seven instead of eight hours], or done something impactful this week, because that minimizing urge is still looking at all your unclaimed time, and wondering why you aren’t using it better, too.”

How important is my own celebration, though, when you really weigh it against what I could be doing with even more time? (This isn’t just abstract impact points; there are other beings whose struggles matter no less than mine do, and fewer frivolities for me could mean relief for them.)
I think where I fundamentally disagree with this post is that, for many people, aiming for the minimum doesn’t put you anywhere close to the minimum, let alone below it. Getting to the minimum, much less below it, can be very hard, such that people who aim at it just aren’t in much danger of undershooting. If you find this is not true for yourself, then please do back off from the minimum. But remember that in the counterfactual where you hadn’t tested your limits, you probably would not have gotten close to optimal.
This post includes some saddening anecdotes about people who ended up miserable because they tried to optimize all their time for altruism. I don’t want to trivialize their suffering. Yet I can conjure anecdotes in the opposite direction (and the kind of altruism I care about reduces more suffering in expectation). Several of my colleagues seem to work more hours than a typical job entails, and I have no evidence that the quality of their work is any the worse for it. I’ve found that the amount of time I can realistically devote to altruistic efforts is pretty malleable. No, I’m not a machine; of course I have my limits. But when I gave myself permission to do altruistic things for parts of weekends, or into the later hours of weekdays, well, I could. “My happiness is not the point,” as Julia said in this post, and while she evidently doesn’t endorse that statement, I do. That just seems to be the inevitable consequence of taking the sentience of beings besides yourself (or your loved ones) seriously.
See also this comment:

“Personally [I] have been trying to think of my life only as a means to an end. [While] my life technically might have value, I am fairly sure it is rather minuscule compared to the potential impact [I] can make. I think it’s possible, though probably difficult, to intuit this and still feel fine / not guilty, about things. … I’m a bit wary on this topic that people might be a bit biased to select beliefs based on what is satisfying or which ones feel good.”
I do think Tessa’s point about slack has some force—though in a sense, this merely shifts the “minimum” up by some robustness margin, which is unlikely to be large enough to justify the average person’s indulgences.
2. “You have more than one goal, and that’s fine”

“If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of ‘support my friend and our friendship,’ not my goal of ‘make the world as good as possible.’ When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my ‘personal satisfaction’ budget, along with money I use for things like getting coffee with friends.”
It puzzles me that concerns about the utility monster—sacrificing the well-being of the many for the super-happiness of one—are so common, yet we seem to find it totally intuitive that one can (passively) sacrifice the well-being of the many for one’s own rather mild comforts. (This intuition is confounded by the act vs. omission distinction, but do you really endorse that distinction?)
The latter conclusion is basically what accepting goals other than “make the world as good as possible” implies. What makes these other goals so special that they can demand disproportionate attention (“disproportionate” relative to how much actual well-being is at stake)?
3. “Ineffective Altruism”
Due to the writing style, it’s honestly not clear to me what exactly this post was claiming. But the author does emphatically say that devoting all of their time to the activity that helps more people per hour would be “premature optimization.” And they celebrate an example of a less effective thing they do because it consistently makes a few people happy.
I don’t see how the post actually defends doing the less effective thing. To the extent that you impartially care about other sentient beings, and don’t think their experiences matter any less because you have fewer warm fuzzy feelings about them, what is the justification for willingly helping fewer people?