Ethical offsetting is antithetical to EA
[My views are my own, not my employer’s. Thanks to Michael Dickens for reviewing this post prior to publication.]
[More discussion here]
Summary
Spreading ethical offsetting is antithetical to EA values because it encourages people to focus on negating harm they personally cause rather than doing as much good as possible. Also, the most favored reference class for the offsets is rather vague and arbitrary.
There are a few positive aspects of using ethical offsets, and situations in which advocating ethical offsets may be effective.
Definition
Ethical offsetting is the practice of undoing harms caused by one’s activities through donations or other acts of altruism. Examples of ethical offsetting include purchasing carbon offsets to make up for one’s carbon emissions and donating to animal charities to offset animal product consumption. More explanation and examples are available in this article.
Against offsetting
I think ethical offsetting is antithetical to EA values, and have three main objections to it.
1) In practice, people doing ethical offsetting use vague and arbitrary reference classes.
2) It’s not the most effectively altruistic thing to do.
3) It spreads suboptimal and non-consequentialist memes/norms about doing good.
1) The reference class people pick for ethical offsets is arbitrary.
For example, let’s say I cause some harm by buying milk that came from a cow that was treated poorly, and I want to negate the harm. I have a bunch of options.
I cannot undo the exact harm done by my purchase once it’s happened, but I could (try to) seek out that specific cow and try to do something nice for her, negating the harm I caused for that specific cow’s utility calculus. I could donate some money to a charity that helps cows, negating my harmful effect on the total utility of cow-kind. I could donate some money to a charity that helps all farmed animals, negating my harmful effect on farmed animal-kind. Or I could donate to whatever charity I thought did the most good per dollar, negating my negative impact on the universe most cost-effectively but less directly.
People seem to settle on a sort of broad cause-area-level offsetting preference (e.g. donating to help farmed animals). While this reference class seems intuitive, it’s ultimately arbitrary*.
2) Ethical offsetting isn’t the most effectively altruistic thing.
You should do the things you think are most effectively altruistic, and you should donate to the charities you think are most effective. If you eat dead animals and don’t believe animal charities are the most effective charities, I don’t think you should donate to them.
Like everything else, ethical offsetting has opportunity costs; you could use that money to donate to the best charity, which is often different from the charity you’re using for ethical offsetting. It causes a harm relative to the world where you donate only to the most effective charity.
Even if you think the charity you donate your offsetting money to is the most effective, I don’t think it’s helpful to do ethical offsetting. Much of the suffering in the world isn’t directly caused by anyone, so an offsetting mindset increases the probability that you’ll miss big sources of suffering down the line. It causes a bias towards addressing anthropogenic harms, rather than harms from nature.
3) Ethical offsetting spreads anti-EA memes and norms
Ethical offsetting reinforces a preoccupation with not doing harmful things (instead of not allowing harmful things to happen, and taking action when they do). But EAs should (and usually do) focus on the sufferers, not themselves.
By encouraging others to offset, we set norms oriented around people’s personal behavior. We encourage an inefficient model of charity that involves donating based on one’s activities, not one’s abilities or the needs of charities that help neutralize various harms. We miss the chance to communicate about core EA ideas like cause prioritization and room for more funding by establishing a framework that has little room for them.
There are some other dangers involved in ethical offsetting, although I haven’t seen much evidence they actually occur: Offsetting may also encourage unhealthy scrupulosity about the harms we inevitably contribute to in order to function (although it could also help alleviate anxiety about them). And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them. This could contribute to careless and destructive norms about personal behavior.
Caveats
Offsetting is better than nothing. There may be situations in which ethical offsetting is the biggest plausible ask one can make. In such situations, I think bringing up the idea of ethical offsetting may be appropriate. And it may be an interesting conversation starter about sources of suffering and ways of alleviating them.
I’ve previously discussed my concerns about the obstacles to changing one’s mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed.
It may be really psychologically beneficial for some people, similar to the way donations for the dubiously-named fuzzies (donations for causes that are especially personally meaningful to the donor rather than maximally effective) sometimes are.
I think the argument that we should focus on doing lots of good rather than fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I’m wary about making it too frequently. I’m most worried about this concern.
*The reference-class Schelling point is stronger with carbon offsets, where the harmful thing is adding some carbon dioxide to the atmosphere. Carbon dioxide molecules are pretty interchangeable. If you remove as many as you added, you neutralize the harm from your emissions-causing action very directly, which is intuitively appealing.
All suffering may be equally important, but not all forms of harm are the same, or even similar. How similar the harm you offset is to the harm you cause can vary a lot. Few other types of offsetting I’ve heard of allow the opportunity to create a future so similar to the one where the harmful activity had never been done.
I don’t think ethical offsetting is antithetical to EA. I think it’s orthogonal to EA.
We face questions in our lives of whether we should do things that harm others. Two examples are taking a long plane flight (which may take us somewhere we really want to go, but also releases a lot of carbon and causes global warming) and eating meat (which might taste good but also contributes to animal suffering). EA and the principles of EA don’t give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there’s also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would have come from a polluting coal plant. I think that’s taking ethical behavior in your personal life too far. But I also think that it’s possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with “Well, I’m an EA, who cares?” But nobody knows exactly how far is too far vs. not far enough, and EA doesn’t help us figure that out.
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg “I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it”, or a literal way “I am actually going to pay 0.01 cents to offset the costs of this shower.”
As such, I think all of your objections to offsetting fall short:
The reference class doesn’t particularly matter. The point is that you worried you were doing vast harm to the world by taking a hot shower, but in fact you’re only doing 0.01 cents of harm to the world. You can pay that back to whoever it most soothes your conscience to pay it back to.
Nobody is a perfectly effective altruist who donates 100% of their money to charity. If you choose to donate 10% of your money to charity, that remaining 90% is yours to do whatever you want with. If what you want is to offset your actions, you have just as much right to do that as you have to spend it on booze and hookers.
Ethical offsetting isn’t an “anti-EA meme” any more than “be vegetarian” or “tip the waiter” are “anti-EA memes”. Both involve having some sort of moral code other than buying bednets, but EA isn’t about limiting your morality to buying bednets, it’s about that being a bare minimum. Once you’ve done that, you can consider what other moral interests you might have.
People who become vegetarian believe that, along with their charitable donations, they feel morally pushed to being vegetarian. That’s okay. People who want to offset meat-eating believe that, along with their charitable donations, they feel morally pushed to offset not being vegetarian. That’s also okay. As long as they’re not taking it out of the money they’ve pledged to effective charity, it’s not EA’s business whether they want to do that or not, just as it’s not EA’s business whether they become vegetarian or tip the waiter or behave respectfully to their parents or refuse to take hot showers. Other forms of morality aren’t in competition with EA and don’t subvert EA. If anything they contribute to the general desire to build a more moral world.
[written when very tired]
They can be in competition with EA, or subvert it. I think most do, if you follow them to their conclusions. Philanthrolocalism is a straightforward example of a philanthropic practice that seems to be in direct conflict with EA. But more broadly, many ethical frameworks, like moral absolutism, come into conflict with EA ideas pretty fast. You can say most EAs don’t only do EA things, and I’d agree with you. And you can say people shouldn’t let EA ideas determine all their behaviors, and I’d also agree with you.
And additionally, for most ideologies, most people fall short much of the time. Christians sin, feminists accidentally support the patriarchy, etc. That doesn’t mean sinning isn’t antithetical to being a good Christian or supporting the patriarchy to being a good feminist. You can expect people to fall short, and accept them, and not blame them, and celebrate their efforts anyway, without pretending those things were good or right.
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that’s to be expected, and great that they try! But an activity one knows doesn’t do the most good (directly or indirectly) should not be called EA.
From all this, you could continue to press your argument that they’re merely orthogonal. I might have agreed, until I started seeing EAs trying to convince other EAs to do ethical offsetting in EA fora and group discussions. At that point, it’s being billed (I think) as an EA activity and taking up EA-allocated resources with specifically non-EA principles (in particular, I think practices that drive (probably already conscientious!) individuals to focus on the harm they commit rather than on seeking out great sources of suffering have been among the most counterproductive habits of general do-goodery in recent history).
Without EA already existing, ethical offsetting may have been a step in the right direction (I think it’s probably 35% likely that spreading the practice was net positive). With EA, and amongst EAs, I think it’s a big step back.
That said, I agree with you that:
I think “do as much good as possible” is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it’s counterproductive to define this in terms of “well, I guess they failed at EA, but everyone fails at things, so that’s fine”; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn’t seem very friendly (see also my response to Squark above).
My interpretation of EA is “devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible”. This interpretation is agnostic about what you do with the rest of your resources.
Consider the decision to become vegetarian. I don’t think anybody would think of this as “anti-EA”. However, it’s not very efficient—if the calculations I’ve seen around are correct, then despite being a major life choice that seriously limits your food options, it’s worth no more than a $5–$50 donation to an animal charity. This isn’t “the most effective thing” by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes—it’s part of their personal morality that’s not necessarily subsumed by EA, and it’s not hurting EA, so why not?
I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing any more than vegetarianism itself is, but it’s part of some people’s personal morality, and it’s not hurting EA. Suppose people in fact spend $5 offsetting nonvegetarianism. If that $5 wasn’t going to EA charity, it’s not hurting EA for the person to give it to offsets instead of, I don’t know, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then that’s the fallacy in this comic: https://xkcd.com/871/
Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?
Recommending “stop offsetting and become vegetarian” results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.
Recommending “stop offsetting but don’t become vegetarian” results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.
The only thing that doesn’t seem strictly worse is “stop offsetting and donate the $5 to a charity more effective than the animal charity you’re giving it to now”. But why should we be more concerned about making them give the money they’re already using semi-efficiently to a more effective charity, as opposed to starting with the money they’re spending on clothes or games or something, and having the money they’re already spending pretty efficiently be the last thing we worry about redirecting?
Aren’t you kind of not disagreeing at all here?
The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA because you wouldn’t have used that money for EA anyway, and Claire claims that EAs suggesting ethical offsetting to people as an EA-thing to do is antithetical to EA because it’s not the most effective thing to do (with your EA money).
The two claims don’t seem incompatible with each other, unless I’m missing something.
Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).
The deviation from imaginary “perfect altruism” is either due to having values other than improving the world or due to practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent that outweighs the gain. Again, moral offsets don’t reflect the relevant considerations.
I gave the example of giving 10% to bed nets because that’s an especially clear example of a division between charitable and non-charitable money—eg I have pledged to give 10% to charity, but the other 90% of my money goes to expenses and luxuries and there’s no cost to EA to giving that to offsets instead. I know many other EAs work this way too.
If you believe this isn’t enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there’s such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you “should have” given to charity.
But if you’re not working off a model where you have to agonize over everything, I’m not sure why you should agonize over offsets.
I don’t think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don’t reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.
No. Have you tried it? I have. It works fine for me.
Maybe some people are too addicted to modern comforts or maybe they can’t handle the stress and pity they feel when thinking about charity. Sucks for them, but it’s a pragmatic issue which doesn’t directly change the moral issue.
(Two years later...) I have tried it. It’s a disaster for me. Every time I buy food, I think, “Someone else needs this food more than me,” which is an accurate statement but takes me to a dark place.
This seems hard; sorry to hear about it :-/
For what it’s worth, I’ve found self-laceration like this to be both really bad for my mental health and really bad for my personal efficacy.
Rather. That’s why I’ve donated a set percentage for about a decade now. “Set and forget” direct debits are both easier and more effective than constantly questioning which expenses are strictly necessary and which are luxuries. Budgeting how much goes to charity and how much goes to my expenses also makes it easier to get along with friends and family. “Sorry, that’s not in the budget” is easier than “Sorry, visiting you is less important than deworming strangers’ children.”
It seems straightforward to realize that you need food so that you can go about your business of making the world better. A soldier in WWII did not feel some kind of moral pain at the fact that he was getting more meat in his rations than the civilians back home. To agonize or “self-lacerate” about this common-sense logic is an abnormal pathology which is specific to certain types of people who join EA. So I understand that it doesn’t work for you, but I think that’s not representative of how most people will think, and it’s worth making a real effort to learn to get along with the rational line of thought.
I think characterizing thought-patterns as “abnormal” isn’t helpful for the person you’re addressing, and isn’t good for our community’s discourse.
Especially when the thought-pattern in question is fairly common around these parts.
Also “how most people think” isn’t a good benchmark for “how ought we think.”
Well it is not normal. That’s what abnormal means. I think that the most helpful thing is to tell the truth. I have abnormal thought patterns too, it doesn’t perturb me to recognize it.
No, that is exactly when it is most important to say “hey, this is not a foregone conclusion, you are in a bit of an echo chamber”.
Sure, what is rational is a good benchmark for how we should think, and it’s rational to eschew hard rules about what percentage of your money is luxurious versus what percentage is charitable.
I am using “how most people think” as a good benchmark for how we can think, and what I am pointing out here is that it is possible to adopt the rational way of thinking without going crazy and self-flagellating.
This reads a bit like “hey, I have the same thing you’re having, but it’s not a problem for me. Maybe if you just snapped out of it, it wouldn’t be a problem for you either!”
I think this sort of framing lacks compassion & can exacerbate things.
I don’t follow this; could you expand on it a little?
But I didn’t say “Maybe if you just snapped out of it, it wouldn’t be a problem for you either,” I said it was abnormal.
If you have a better way of framing the same facts, feel free to present it.
Well there isn’t any basis for it, and it contradicts consequentialism, it contradicts deontology, really I can’t think of any framework that says that you should make a budget such that a percentage of your money is a carte blanche gift to you that is independent of the considerations of benevolence and distributive justice. In all sensible moral theories, the needs of others count as a pro tanto reason to donate any amount of your money.
I think a relevant test here is “Is this better than saying nothing at all?”
It conveys the truth, which is a good reason to presume that it is.
“First, is it true? Second, is it kind? Third, is it necessary?”
Yes, yes, and yes. In Scott’s post he defines unkindness as anger or sarcasm—not the use of words like “abnormal” that just tickle us the wrong way.
But, like… What you said made me feel bad and was also unhelpful. I gained nothing from it, and lost a good mood. So why say it?
If you had suggested a useful resource or alternative, I would have thought your comment had merit.
Alternatively, you could have shown compassion by reflecting back what you heard—saying something like, “It sounds like making trade-offs on a daily basis is very emotional for you, so you donate a set percentage to cope. That might be the best solution for you right now. However, that doesn’t mean it’s the best solution for everyone.”
+1 to Khorton.
This could be a good opportunity for kbog to reflect and maybe update.
But I predict that they’ll instead double-down on their position...
Obviously we don’t always make comments that help the other person; your comment, for instance, did not help me at all, because I am 100% content with abolishing the charitable/non-charitable distinction in my budget, and need no help from anyone with figuring it out. Yet you made your comment nonetheless, presumably for the benefit of others, so they might know your experience, or for the benefit of me, just that I might know more about your experience. Likewise, I made my comment for the benefit of anyone else who is reading to persuade them that your experience is atypical, and to persuade you that your experience is atypical.
I didn’t aim to make you feel bad.
But I don’t feel compassion for people just because they have arrived at some kind of existential angst, I feel compassion for people when they have a more severe problem, so if I expressed sorrow here then I’d be dishonest.
I quite clearly said “I understand that it doesn’t work for you.” All you are doing is pleading for more cushions around my words. Such effort would be better spent thinking about whether my statements are correct or not, or just moving on with your life.
Likewise, effort on my part is better spent on other things besides adding such cushions. You clearly said yourself that such decisions are very emotional for you, so it’s obvious to every reader that they are very emotional for you, and if you have a basic level of respect for my reading comprehension abilities then you will presume that I understood your statement that such decisions are very emotional for you, and obviously I did nothing to disagree with that fact—it is, after all, not the sort of thing that can be reasonably disagreed with from a distance. So to merely repeat this obvious fact, which is understood by everyone to be understood by everyone, would be a waste of time.
But I don’t merely believe it’s not the best solution for everyone, I believe it’s the wrong solution for most people, so this would be an inaccurate representation of my position.
An important point. Failing to take this into account comes across as morally narrow.
This is a bit of a side point, but to what extent do EAs actually promote ethical offsetting? It seems to me like it normally gets raised in the following ways:
A dominance argument to show that ethical consumption isn’t the most important thing to focus on. Hypothetical example: If I think AMF is the best donation opportunity, but donating to The Humane League is better than going vegetarian (because it would be very cheap to “offset” my diet), it shows that donations to AMF are very much better than going vegetarian. This shows going vegetarian makes a small contribution to my potential social impact, so I shouldn’t do it unless it involves negligible sacrifice. (A back-of-the-envelope version of this is sketched after this list.)
As an option for non-consequentialist minded people who don’t just want to focus on the best activities, because they have special obligations to avoid doing certain types of harm.
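For concreteness, here is that dominance argument as a back-of-the-envelope calculation. Every number is a made-up assumption for the sketch, not a claim about the actual charities:

```python
# Back-of-the-envelope sketch of the dominance argument above.
# All figures are illustrative assumptions.

offset_veg_year_usd = 20.0     # assumed THL donation that "offsets" a year of meat-eating
annual_donations_usd = 2000.0  # assumed annual donation budget

# If a $20 donation does as much for animals as a year of vegetarianism,
# diet change is worth about this fraction of the donor's potential impact:
fraction = offset_veg_year_usd / annual_donations_usd
print(f"Going vegetarian ~ {fraction:.0%} of annual donation impact")  # ~1%

# If you further believe AMF beats THL per dollar donated, then donations
# to AMF beat going vegetarian by an even wider margin, so the diet change
# only makes sense if it involves negligible sacrifice.
```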
It doesn’t seem like EAs promote ethical offsetting as a generally good thing to do. Rather, EAs suggest identifying the highest leverage ways for you to make a difference in the world, and focusing your attention on those. (and not worrying about other ways to have more impact that involve more sacrifice)
I don’t think many EAs spend a lot of time promoting it, but I hear EAs discuss the idea positively (and, I think, uncritically) with one another from time to time. It was more common shortly following the SSC article.
Does it actually show this? I generally hear the argument go something like this:
You can probably convert a lot of vegetarians by donating to The Humane League, which is better than becoming vegetarian yourself. Therefore donating to THL is better than being vegetarian.
Naive estimates say THL does more good than AMF, but AMF has much more robust evidence than THL, so donating to AMF is better.
Therefore donating to AMF is better than being vegetarian.
Parts 1 and 2 use contradictory claims. Part 1 claims that naive expected value dominates, and part 2 claims that robustness of evidence dominates.
Michael, do you have an example? I’ve never seen the union of those 3 in one argument before, although I have seen each of the three claims made by different people.
E.g. it doesn’t describe this post by Jeff Kaufman or this by Greg Lewis. The usual reasons I hear from such people favoring AMF over THL are greater flow-through effects or lower weight on nonhuman animals.
Separately, I hear people, e.g. Tom Ash and Peter Hurford, saying something like #2, but they are themselves vegetarian, and not making arguments for offsetting that I have seen. Indeed, they have challenged it on the basis that the estimates for ACE charities are not robust, which is consistent and contra the argument you described.
You’re correct that Tom and I both assert something along the lines of #2 but have never argued #3.
I hear people separately make #1 and #2, I can’t recall hearing someone say both #1 and #2 in a single breath. But if you favor AMF over THL because AMF has stronger evidence behind it, that doesn’t preclude going vegetarian. “AMF is better than THL” is not a good argument against being vegetarian, and doesn’t show that vegetarianism is negligible compared to AMF donations, which is the argument Ben was quoting.
So you don’t actually hear people making the argument you mentioned, and the published arguments by Kaufman and Lewis don’t suffer from the inconsistency you mention? Kaufman makes an argument that counting human and cow lives equally, modest AMF donations can be a bigger deal than dairy consumption, while Lewis argues that if one takes ACE estimates seriously, then modest donations to ACE-recommended charities can be a bigger deal than general carnivory.
On the question of donations to AMF vs THL, Kaufman weights AMF over ACE charities because he cares less about nonhuman animals than humans. Some others do so because of flow-through effects. Lewis is vegetarian, but I think mainly donates to poverty and existential risk related things, and I don’t know his precise reasons but they aren’t germane to his essay.
“is the argument Ben was quoting.”
Ben’s description didn’t specify someone thinking AMF was better because they didn’t believe in the robustness of THL ‘animals spared’ estimates. You inserted that, which created the tension in your hypothetical argument. People who favored AMF over THL because of flow-through effects, or because of weighting humans more, wouldn’t have that tension (I would argue the flow-through view would create other tensions, but that’s a different story).
I think you’re right actually. A lot of people who prefer AMF to THL are still vegetarian, and that’s totally reasonable and self-consistent.
One thing I like about offsetting is that it creates a more cooperative and inclusive EA community. I.e., animal advocates might be put off less by meat-eating EAs if they learn they offset their consumption, or poverty reducers might be less concerned about long-termists making policy recommendations that (perhaps as a side effect) slow down AI progress (and thereby the escape from global poverty) if they also support some poverty interventions (especially when doing so is particularly cheap for them). In general, there seem to be significant gains from cooperation, and given repeated interaction, it’s fairly easy to actually move towards such outcomes, including by starting to cooperate unilaterally.
Of course, this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.
Good point.
Couldn’t one argue that offsetting harms that people outside EA care about counts as cooperating with mainstream people to some degree? In practice the way this often works is by improved public relations or general trustworthiness, rather than via explicit tit for tat. Anyway, whether this is worthwhile depends on how costly the offsets are (in terms of money and time) relative to the benefits.
Thanks, I agree. It still seems to me that a) mainstream people probably matter somewhat less than specific groups, b) we should think about how mainstream people would like to be helped, and that may or may not be through offsetting.
I just discovered this related and entertaining passage from Tim Harford’s The Undercover Economist (2005).
I think offsetting makes sense when seen as a form of moral trade with other people (or even possibly other factions within your own brain’s moral parliament).
Regarding objection #1 about reference classes, the answer can be that you can choose a reference class that’s acceptable to your trading partner. For example, suppose you do something that makes the global poor slightly worse off. Suppose that a large faction of society doesn’t care much about non-human animals but does care about the global poor. Then donating to an animal charity wouldn’t offset this harm in their eyes, but donating to a developing-world charity would.
Regarding objection #2, trade by its nature involves spending resources on things that you think are suboptimal because someone else wants you to.
An objection to this perspective can be that in most offsetting situations, the trading partner isn’t paying enough attention or caring enough to actually reciprocate with you in ways that make the trade positive-sum for both sides. (For trade within your own brain, reciprocation seems more likely.)
I sympathise with the point you make with this post.
However, isn’t it antithetical to consequentialism, rather than EA? EAs can have prohibitions against causing harms to groups of people.
How does this speak to people who use rule-based ethics that obliges them to investigate the benefit of their charitable gifts?
This would make sense, except that pretty much every argument for offsets that I’ve seen comes from consequentialists or consequentialist-aligned people.
Offsetting doesn’t seem very virtuous, and deontologists generally have a poor model for positive rights/obligations.
I don’t think most nonconsequentialist theories provide a basis to accept offsetting either though. But I’d have to see some people make a positive case for it to know where they’re coming from.
It seems to depend on the harm. People accept off-setting for minor harms, but not for major ones.
I think they’re consistent with a Kantian perspective. Also with a risk-averse consequentialist, and with someone who likes to take responsibility for the consequences of their actions in a like-for-like manner for ethical-aesthetic reasons.
Often EAs propose offsetting as a counterargument to “if something harms others you must not do it”. So you show that offsetting is better than strict harm avoidance, and then you give reasons why you should instead focus on the most important things.
Offsetting isn’t antithetical to EA; to my mind it’s a step towards EA.
Notice that the narrowest possible offset is avoiding an action. This perfectly undoes the harm one would have done by taking the action. Every time I stop myself from doing harm I can think of myself as buying an offset of the harm I would have done for the price it cost me to avoid it.
I think your arguments against offsetting apply to all actions. The conclusion would be to never avoid doing harm unless it’s the cheapest way to help.
Yep. Except I think this would be most of the time, since people tend to dislike it when you harm others in big or unusual ways, and doing so is often illegal. So at the very least you frequently take hits to your reputation (and the reputation of EA, theoretically) and effectiveness when you cause big unusual harms.
I am not aware of EA associated people using ethical offsets beyond a small amount they don’t consider part of their charity budget. Is there an “Ethical Offsetting is Great for EA” position you are arguing against?
It’s not very common but I’ve heard it promoted among EAs several times in different EA circles.
Jeff has advocated this.
I’m not advocating offsetting, but I don’t have a good name for what I am trying to advocate. The idea is that you should prioritize the activities that have the best tradeoff between downside-for-you and upside-for-others. There are ways that this is similar to offsetting (if you can show that the harm caused by X is less than the harm caused by not donating $Y, then you should feel fine donating $Y instead of avoiding X), but in this framework you don’t arrive at your donations by tallying up your harms and pricing them; instead you set out to do as much good as you can without making yourself miserable.
I think that your argument is much more likely to discourage people making reasonable use of ethical offsets than anyone engaged in the problem you describe, mostly based on the proportion of such people that actually exist. As such, I think publishing such an argument, without the opposed view being actually promoted by anyone you care to mention, is irresponsible.
I wouldn’t make this argument in a context where I don’t think the vast majority of people reading it are EAs. It wouldn’t make sense in a non-EA-dense context, since the argument is “offsetting isn’t EA”, not “offsetting is bad and no one should do it”. Like I said, I think offsetting is better than nothing. The proportions are obviously very different in the EA community than outside it.
I don’t want to mention people because a) they may not want their views made public, b) it might embarrass them to be named in a context where I’m being critical of their views, and c) in about 2/3 of the cases I remember, the conversation was in person, so I can’t easily cite the argument anyway.
This is not at all obvious. All I hear about ethical offsets is at least EA-adjacent.
Understanding all of this, I still say that it is net negative to publicly make your argument when there is nothing you can publicly cite as promoting what you argue against. If you notice such views in private communications, it may make sense to address them in those private communications.
If it’s all EAs or EA-adjacent people, then why would my post be “much more likely to discourage people making reasonable use of ethical offsets than anyone engaged in the problem [I] describe, mostly based on the proportion of such people that actually exist”? What do you mean by “reasonable use”? If it’s mostly EAs doing ethical offsetting (it isn’t), that makes it more likely that my post is helpful, since my post is more relevant for people with EA-ish goals.
Given that I notice people discussing offsetting in separate circles consistently, it makes sense to believe other people are having those conversations that I’m not aware of. In some of the cases, the conversation took place in a large group (between 20 and 30 people) where I didn’t have the opportunity to express my views fully (nor were they as fully developed at that time).
There are relatively public conversations (Jeff’s, as cited by Julia, comments on the SSC post, some others). I could cite sources (I can think of two more that are definitely online and not from private conversation). I am choosing not to because I’m not convinced it’s a helpful exercise.
If you don’t think people are interested in vegan offsetting, then why would telling them not to do it matter? It would probably not be impactful (harmful or helpful) if no one was interested in ethical offsetting to begin with.
I consider “reasonable use” to mean spending a small amount of money on offsets to purchase mental health in the form of not feeling guilty about small harms one might cause, where these offsets are not considered an EA activity, and where one who considers themselves a part of EA would be spending more money, time, effort, or whatever other resource on something they chose for efficiency.
All advocacy for ethical offsets I have seen has been compatible with this reasonable use, and I don’t think anyone is doing the unreasonable thing of calling ethical offsets an EA activity or focusing their EA efforts on them, or saying anyone should do that.
Jeff’s article does not talk about ethical offsets. It says be careful about trading your happiness inefficiently for small gains in general utility, not anything about paying offsets instead.
The fact that you don’t think citing these sources is a helpful exercise is evidence that publicly arguing against them is also not a helpful exercise.
I think people are interested in reasonable offsetting, not offsetting as a primary activity. I think I have been clear about this.
I don’t care very much specifically about vegan offsets. I care a lot about the general category of EAs being able to do small sub-optimal things that enable them to focus more on their more optimal efforts, and to sustain that focus long term.
Yes!
Thanks for this post.
It’s like we came full circle from people donating minimal amounts of money to charity to relieve their guilt over their perpetuation of global injustice, to people working very hard and doing everything they can to fight global injustice, to people donating minimal amounts of money to relieve their guilt over their perpetuation of global injustice.
Just accept it. Some of your actions will harm others no matter what you do. The only way to make it worthwhile is to go out there and achieve lots of valuable things. Be confident and proud of what you accomplish and you can accept the harm that you will have to commit.
+1
I strongly agree. It’s like trying to avoid a trade deficit with every country you interact with. The currency of value is better if it’s not region-locked.
Thank you for this thought-provoking article! We want to make it the topic of our next meetup, so I’ve tried to clarify what my new position should be.
Your first two points are easily conceded—in my view, everyone who offsets should direct their donations to whichever charity they consider most effective. Your third point is the most interesting.
Nino already married your and Scott’s positions, but I find it more useful to structure my thoughts in a list of pros and cons anyway.
On the pro side I see the following arguments:
Contrary to Claire’s point, I think offsetting also questions the act-omission distinction because instead of forgoing something, one engages in proactive activism. Having done that, it will be harder to later argue that doing good is supererogatory, because it would be inconsistent with one’s past behavior.
Offsetting can be used as a starting point to extend the circle of compassion in that a person could be brought to care enough about the harm inflicted by friends and family members to offset for them too. (But I haven’t seen this implemented.)
Charities that advocate for nonhuman animals are probably the most commonly chosen reference class, and they are highly funding constrained, possibly more than they are talent constrained, so that an additional regular donor may be worth many additional vegans.
Outside EA there are many nonveg*ns that are compassionate and want to reduce suffering but find that for them or in their context, veganism would be hard. Instead of resorting to the defensiveness and denigration discussed at the last meetup, they can join in with highly impactful donations.
Offsetting can counter the cliché that veg*ns are dogmatic Siths that only deal in absolutes.
Bridging the schism between veg*ns and nonveg*ns can help make advocacy for farmed animals a universally accepted movement, which would greatly simplify political advocacy.
On the con side I see the following arguments:
Offsetting also bolsters the act-omission distinction because it fails to provide incentives to scale one’s proactive activism beyond the low level of harm the average person inflicts, so that the offsetter will fall far short of their potential. (Unless they also offset for friends and family members or even larger circles.)
Offsetting may incur moral licensing when the satisfaction a person gains from “having donated” doesn’t scale in proportion with the size of the donation, so that a small donation makes further donations unlikely to the same extent that a large donation would have.
Advantage 3 only holds for our current state of an anti-inductive system. In a decade or two there will hopefully be a point when the suffering of farmed animals has been reduced sufficiently to make offsetting much more expensive. At that point, an additional veg*n will be more valuable than an additional offsetter, given what the latter can be expected to be able to donate. In short, success in spreading offsetting diminishes its own value. Core EA ideas don’t suffer from that problem.
Offsetting, when described in terms of offsetting, is only compatible with a subclass of consequentialist moralities, so that its impact is limited or the framing should be reconsidered.
Offsetting may signal a readiness to defect (in such situations as the prisoner’s dilemma or the stag hunt), which might interfere with the offsetter’s chances for trade with agents that are not value aligned.
Offsetting when described in terms of offsetting may in turn introduce (or aggravate) the schism between deontological and consequentialist veg*ns.
When offsetting funds are taken from a person’s EA budget, it is at best meaningless because the money would’ve been donated effectively anyway, and likely harmful if the reference class is chosen to exclude the most effective giving opportunities.
When offsetting becomes associated with EA, it may increase the perceived weirdness of EA, making it harder for people to associate with more important ideas of EA.
Some of the disadvantages only limit the scope of offsetting, others could be avoided with different rhetoric. What other pros or cons did I forget?
Cool, this mostly seems right.
I think the harmfulness of offsetting’s focus on collectively anthropogenic sources of suffering is still being underestimated in these conversations. (I’m using “collectively anthropogenic” because there are potential sources of badness, like UFAI, that are anthropogenic but only caused by a few people, so spreading the idea of offsetting to most people would be useless for addressing the problem of UFAI. Also, offsetting the harm done by UFAI would be, uh, tricky.) I think offsetting might even reinforce a non-interventionist mindset that could prove extremely harmful for addressing problems like wild animal suffering.
One good aspect of offsetting that I think I initially underestimated is the way it can be used as a psychological tool for beginning to alieve that a cause area matters. For example, I can imagine an individual who is beginning to suspect animal suffering is important, but finds the idea of vegetarianism or veganism daunting, and shies away from it and thus doesn’t want to think more about animal suffering. For them, offsetting could be a good bridge step. I don’t think this conflicts with anything I said, but I don’t want people to feel like it’s shameful to use this tool.
I’d want to add on to:
Pro 3: If you’re just offsetting, it’s worth only as much as one additional vegan (if your numbers are right). I haven’t seen evidence that ethical offsetting leads to big regular donors. It may, and if you just meant to bring up the possibility that seems reasonable.
Pro 4: People who eat animal products can donate to animal charities even if it’s not offsetting. That’s great! But you don’t need offsetting to introduce that possibility. I think offsetting harmfully frames the discussion around them “making up” for their behavior, instead of possibly just making large donations that help lots of animals. Many vegetarians enthusiastically make large donations to animal charities, which is wonderful, without worrying about offsetting. I don’t know what happened at your last meetup but I think it’s awesome when nonvegans donate to animal charities.
Pro 6: I’m not sure how offsetting helps bridge this schism well. I can imagine some arguments about how it would help, and others about how it would hurt.
Con 5: I’m not sure how offsetting signals a willingness to defect. Could you explain that more?
Collectively anthropogenic sources of suffering: True, and that class of suffering is already broad. I wouldn’t expect people to extend their circle of compassion to even just the harm caused by all of humanity just via the idea of offsetting. The friends and family scenario is probably already the limit.
Psychological tool: Indeed. This tool is also one that can be employed without using the term “offsetting,” like “If veganism is too hard for you at this point, just reduce chicken, eggs, and fish. You can also donate to one of ACE’s top charities. That might seem too easy, but at the moment a donation of just $50 allows you to do as much good for the animals as being vegan for a year.” (Well, basically Ben’s point.)
A related problem is figuring out whether the supplements I buy are overpriced compared to an animal product plus top charity donation counterfactual. I wonder if I can just straight compare the prices or whether there are any multipliers I’m overlooking.
About pro 3: Yes, that’s what I meant, the average regular donor compared to the average vegan minus any donations they might make.
About pro 4: The framing we’ve come up with is one for older people who have a harder time changing their habits, namely that they’re donating to create a better society for the next generation. Offsetting isn’t mentioned, but you can still get nonveg*ns donating.
About pro 6: The topic of our last meetup was the threat of unfavorable social moral comparison, that some people trivialize or denigrate people or the behavior of people who they perceive as being more moral. I seem to be well filter-bubbled against such people, but studies have found that a lot of nonveg*ns are ascribing various nasty terms to veg*ns.
When animal advocacy has to fight against such strong forces as people trying to protect their identities and self-image against it, it’ll remain an uphill battle and be labeled as “controversial,” whereas, when we can invite a wide range of people into the movement, we may not be producing the best activists, but we’ll be reducing opposition. (The reducetarian movement is working on that too.) How might offsetting hurt this exact cause?
About con 5: Not compared to nonveg*ns but compared to deontological veg*ns. Then again a given nonveg*n could be assumed to be nonveg*n out of ignorance, while the same could not be assumed about an offsetter. When you’re offsetting you could be seen as defecting against some animals to save other animals (except that nonhuman animals are not really “agenty”).
For example, when a profit-oriented employer pays a person to deliver some pointless advertisement to hundreds of households, and the person does that in order to donate a portion to a charity the employer doesn’t care about, then this deal might work just fine. But when the employer sees that a potential employee has a history of defecting in such arrangements to further their moral goal, the employer may imagine that the potential employee will sell the advertisement to a company that buys scrap paper to donate even more and save time that they can use to swindle several advertisement companies in parallel. So it might hurt a person’s–or more likely, a group’s or movement’s–reputation.
Did this happen at the meetup? Outcomes?
Oops, too long ago; I don’t remember. But I don’t think I updated any more that evening. Not entirely sure.
Edit: I posted before reading others’ comments. Others have already made this and similar points.
Here is a story of how ethical offsetting can be effective.
I was trying to decide if I should fly or go by train. Flying is much faster and slightly cheaper, but the train is much more environmentally friendly. Without the option of an environmental offset, I had no idea how to compare these values, i.e. [my time and money] vs. [the direct environmental effect of flying].
What I did was to calculate what offsetting would cost, and it turned out to be around one USD, so basically nothing. I could now conclude that:
Flying + offsetting > Going by train
Because I would save time, and I could easily afford to offset more than the harm I would do by flying, and still pay less in total.
Now, since I’m an EA, I could also do the next step:
Flying + donating to the most effective thing > Flying + offsetting > Going by train.
But I needed at least the idea of offsetting to simplify the calculation to something I could manage myself in an afternoon. In the first step, I compare things that are similar enough that the comparison is mostly straightforward. The second step is actually super complicated, but it’s the sort of thing EAs have been doing for years, so for this I can fall back on others.
But I’m not sure how I would have done the direct comparison between [flying + donating] vs. [going by train]. I’m sure it’s doable somehow, but with the middle step it was so much easier.
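For what it’s worth, here is a minimal sketch of that first-step calculation. All the numbers are invented for illustration, not taken from my actual trip or from real offset prices:

```python
# Minimal sketch of the flight-vs-train offset comparison described above.
# All numbers are illustrative assumptions, not real emission factors or prices.

flight_emissions_kg = 150.0        # assumed CO2 for the flight
train_emissions_kg = 15.0          # assumed CO2 for the same trip by train
offset_price_usd_per_tonne = 10.0  # assumed price of one tonne of offsets

extra_tonnes = (flight_emissions_kg - train_emissions_kg) / 1000.0
offset_cost = extra_tonnes * offset_price_usd_per_tonne
print(f"Offsetting the extra emissions: ~${offset_cost:.2f}")  # ~$1.35

# If this is tiny compared to the time and money saved by flying, then
# flying + offsetting > going by train, and flying + donating the same
# amount to the most effective charity does at least as well.
```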
While I agree that offsetting isn’t the best thing to spend resources on, I don’t like the framing of it being ‘antithetical to EA’. Whether offsetting is a good idea or not is a good, object-level discussion to have. Whether it is aligned with or antithetical to EA brings in a lot more connotations, with little to gain:
People who already liked offsetting might think that EA isn’t for them.
People who like the EA-community and do offset might worry whether this means that they aren’t ‘EA enough’ (without even reading the arguments).
People who are in favor of utilitarian reasoning but don’t like the EA community might ignore the arguments.
The comment section might be used to discuss the definition of EA, instead of whether offsetting is a good idea or not.
Offsetting can also be viewed as deciding to cooperate in a tragedy-of-the-commons-like situation. If a large enough proportion of the population/businesses decided to offset their emissions, then presumably global warming would cease to be an issue. This would cost everyone a small amount individually, but the collective gain would be large. Perhaps the money could do more good elsewhere, but defecting simply encourages more people to defect as well and possibly causes the whole deal to collapse.
Not that I offset my carbon, just an interesting thought.
If everyone “defected” by donating to the most effective charity instead of offsetting, the whole deal wouldn’t collapse. The world would be a better place.
So if the problem is that people are copycats so doing a thing encourages other people to do the same, it’s better to donate more to an effective charity than to offset, since when people copy you doing that it will make the world even better.
A problem is that different people have different views on what’s most effective. If most people are quasi-egoists, then for them, spending money on themselves or their families is “the most effective charity” they can give to. Or even within the realm of what’s normally understood to be charity, people might donate to their local church or arts center. Relative to their values, this might be the best charity to give to.
The worry is that enough people will defect from the current social norms so that they break down, but not enough people defect to create a new norm of donating to effective charities instead.
Neither an “offset your harm” nor a “donate to effective charities” norm are especially well established in the general population, though. Your argument sounds like it’s based on the former being widespread?
Global warming offsets are pretty big.
The idea of global warming offsets is pretty widespread, but I don’t think a norm of buying them is. Specifically, I don’t think either that they’re very widely bought or even seen as something you’re supposed to buy.
(My impression is that it’s catching on as a norm among sustainably minded companies, though.)
“I’ve previously discussed my concerns about the obstacles to changing one’s mind about cause prioritization, and I can imagine ethical offsetting at the cause area level being used to remind oneself about various causes of suffering in the world and the organizations working to stop them. This could make it easier to change one’s mind about what’s most effective. It seems somewhat plausible that offsetting would help make the community better at updating and better informed.”
This has roughly been my reasoning for considering donating small sums to animal suffering as a cause area and climate change as a cause area. (Though I haven’t done so yet.) I think it helps people to keep an open mind, and I am therefore happy to see them offsetting their ‘wrong’ behaviour.
I agree with Ryan’s and Linch’s comments as well.
I’m not sure that offsetting is better than nothing—it may actually be harmful:
1. Offsetting fools people into thinking that their emissions from (e.g.) flying can be “made harmless” in some way, whereas the bald physical reality is that flight emissions are the most dangerous emissions, released into the most fragile part of the atmosphere (apart from ESAS methane release, and the long-term impact of HCFCs and HFCs).
2. It’s harmful to help persuade people that it’s fine to pollute and pay, rather than actually reduce emissions, especially if most offsets don’t genuinely lead to real and lasting net emissions reductions.
3. Offsetting is a way that corporations can make out that they and their customers are somehow not causing net harm ie. off-setting contributes to corporate greenwash, which is a form of lying.
“And as Scott Alexander points out, offsetting could lead people to think it’s acceptable to do big harmful things as long as they offset them.”
I think it would be helpful to distinguish between the claims (1) “given that one has imposed some harm, one is obligated to offset it” and (2) “any imposition of harm is justified if it is offset.” This article argues against the first claim, while Scott argues that the second one seems false. It seems pretty easy to imagine someone accepting (1) and rejecting (2), and I’d be pretty skeptical of a causal connection between promoting (1) and more people believing in (2). The reverse seems just as (un)likely: “hey, if I don’t have to offset my harms, maybe causing harm doesn’t really matter to begin with.”
I don’t think the causal link between (1) and (2) is weak at all, but agree that the reverse is also likely, which is why I mentioned it: “the argument that we should focus on doing lots of good rather than fixing harms we cause could drive destructive thoughtlessness about personal behavior, so I’m wary about making it too frequently.”
Scott discusses claim (2) in his section III (below)
“The second troublesome case is a little more gruesome.
Current estimates suggest that $3340 worth of donations to global health causes saves, on average, one life.
Let us be excruciatingly cautious and include a two-order-of-magnitude margin of error. At $334,000, we are super duper sure we are saving at least one life.
So. Say I’m a millionaire with a spare $334,000, and there’s a guy I really don’t like…
Okay, fine. Get the irrelevant objections out of the way first and establish the least convenient possible world. I’m a criminal mastermind, it’ll be the perfect crime, and there’s zero chance I’ll go to jail. I can make it look completely natural, like a heart attack or something, so I’m not going to terrorize the city or waste police time and resources. The guy’s not supporting a family and doesn’t have any friends who will be heartbroken at his death. There’s no political aspect to my grudge, so this isn’t going to silence the enemies of the rich or anything like that. I myself have a terminal disease, and so the damage that I inflict upon my own soul with the act – or however it is Leah always phrases it – will perish with me immediately afterwards. There is no God, or if there is one He respects ethics offsets when you get to the Pearly Gates.
Or you know what? Don’t get the irrelevant objections out of the way. We can offset those too. The police will waste a lot of time investigating the murder? Maybe I’m very rich and I can make a big anonymous donation to the local police force that will more than compensate them for their trouble and allow them to hire extra officers to take up the slack. The local citizens will be scared there’s a killer on the loose? They’ll forget all about it once they learn taxes have been cut to zero percent thanks to an anonymous donation to the city government from a local tycoon.
Even what seems to me the most desperate and problematic objection – that maybe the malarial Africans saved by global health charities have lives that are in some qualitative way just not as valuable as those of happy First World citizens contributing to the global economy – can be fixed. If I’ve got enough money, a few hundred thousand to a million ought to be able to save the life of a local person in no way distinguishable from my victim. Heck, since this is a hypothetical problem and I have infinite money, why not save ten local people?
The best I can do here is to say that I am crossing a Schelling fence which might also be crossed by people who will be less scrupulous in making sure their offsets are in order. But perhaps I could offset that too. Also, we could assume I will never tell anybody. Also, anyone can just go murder someone right now without offsetting, so we’re not exactly talking about a big temptation for the unscrupulous.” (http://slatestarcodex.com/2015/01/04/ethics-offsets/)
I agree with this, and have written similarly here:
Good points, but I would go further, having worked in this field both with meteorologists and politicals.
Individual offsets are easier to do than behaviour change, so they’re a handy sop to the guilty consciences of middle-class people who want to keep driving and flying, and thus perfect for self-deception.
More here: www.rationalreflection.net/can-we-offset-immorality
Thus offsets at the individual and local level amount to advanced greenwash, wrapped up as an environmental project.
In fact, most offsets are deeply flawed and many, particularly renewable energy projects (which may help with health and education and have many other justifications) lead to INCREASED emissions—as they tend to lead to purchases of electrical goods, and so a huge increase in energy use locally and in the countries manufacturing the goods, even with reducing carbon intensity.
The best destinations for carbon funds probably include wetland protection in semi-arid regions (see my own www.theglobalcoolingproject.com) or climate campaign groups, e.g. EIA for their HCFC work on the Montreal/Kigali protocols, or anyone working on aircraft emissions, or in India/China, or on combating denialism or bridging political divides (e.g. George Marshall from Climate Outreach).