Other forms of morality aren’t in competition with EA and don’t subvert EA. If anything they contribute to the general desire to build a more moral world.
They can be in competition with EA, or subvert it. I think most are, if you follow them to their conclusions. Philanthrolocalism is a straightforward example of a philanthropic practice that seems to be in direct conflict with EA. But more broadly, many ethical frameworks like moral absolutism come into conflict with EA ideas pretty fast. You can say most EAs don’t only do EA things, and I’d agree with you. And you can say people shouldn’t let EA ideas determine all their behaviors, and I’d also agree with you.
And additionally, for most ideologies, most people fall short much of the time. Christians sin, feminists accidentally support the patriarchy, etc. That doesn’t mean sinning isn’t antithetical to being a good Christian, or that supporting the patriarchy isn’t antithetical to being a good feminist. You can expect people to fall short, and accept them, and not blame them, and celebrate their efforts anyway, without pretending those things were good or right.
Ethical offsetting isn’t an “anti-EA meme” any more than “be vegetarian” or “tip the waiter” are “anti-EA memes”. Both involve having some sort of moral code other than buying bednets, but EA isn’t about limiting your morality to buying bednets, it’s about that being a bare minimum.
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that’s to be expected, and great that they try! But an activity one knows doesn’t do the most good (directly or indirectly) should not be called EA.
From all this, you could continue to press your argument that they’re merely orthogonal. I might have agreed, until I started seeing EAs trying to convince other EAs to do ethical offsetting in EA fora and group discussions. At that point, it’s being billed (I think) as an EA activity and taking up EA-allocated resources with specifically non-EA principles (in particular, I think practices that drive (probably already conscientious!) individuals to focus on the harm they commit, rather than on seeking out the greatest sources of suffering, have been among the most counterproductive habits of general do-goodery in recent history).
Without EA already existing, ethical offsetting may have been a step in the right direction (I think it’s probably 35% likely that spreading the practice was net positive). With EA, and amongst EAs, I think it’s a big step back.
That said, I agree with you that:
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg “I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it”, or a literal way, eg “I am actually going to pay 0.01 cents to offset the costs of this shower.”
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that’s to be expected, and great that they try! But an activity one knows doesn’t do the most good (directly or indirectly) should not be called EA.
I think “do as much good as possible” is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it’s counterproductive to define this in terms of “well, I guess they failed at EA, but everyone fails at things, so that’s fine”; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn’t seem very friendly (see also my response to Squark above).
My interpretation of EA is “devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible”. This interpretation is agnostic about what you do with the rest of your resources.
Consider the decision to become vegetarian. I don’t think anybody would think of this as “anti-EA”. However, it’s not very efficient—if the calculations I’ve seen around are correct, then despite being a major life choice that seriously limits your food options, it’s worth no more than a $5–$50 donation to an animal charity. This isn’t “the most effective thing” by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes—it’s part of their personal morality that’s not necessarily subsumed by EA, and it’s not hurting EA, so why not?
I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing any more than vegetarianism itself is, but it’s part of some people’s personal morality, and it’s not hurting EA. Suppose people in fact spend $5 offsetting nonvegetarianism. If that $5 wasn’t going to EA charity, it’s not hurting EA for the person to give it to offsets instead of, I don’t know, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then that’s the fallacy in this comic: https://xkcd.com/871/
Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?
Recommending “stop offsetting and become vegetarian” results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.
Recommending “stop offsetting but don’t become vegetarian” results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.
The only thing that doesn’t seem strictly worse is “stop offsetting and donate the $5 to a charity more effective than the animal charity you’re giving it to now”. But why should we be more concerned about making them give the money they’re already using semi-efficiently to a more effective charity, as opposed to starting with the money they’re spending on clothes or games or something, and having the money they’re already spending pretty efficiently be the last thing we worry about redirecting?
Aren’t you kind of not disagreeing at all here? The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA because you wouldn’t have used that money for EA anyway, and Claire claims that EAs suggesting ethical offsetting to people as an EA-thing to do is antithetical to EA because it’s not the most effective thing to do (with your EA money).
The two claims don’t seem incompatible with each other, unless I’m missing something.