This comment captures a lot of my concerns about offsetting arguments in the context of veganism, as well as more generally. Spelled out a bit more, my worry for EAs is that we often:
1. Think we ought to donate a large amount
2. Actually donate some amount that is much smaller than this, but still much larger than what most people give
3. Discourage each other from sanctioning people who donate much more than most, for not donating enough
Donations that offset bad acts can presumably fall into the same pool as other donations, which leads to the following issue:
Let's say that Jerry goes around kicking strangers, and also donates 20% of his income to charity. Let's also stipulate that Jerry really thinks he ought to donate 80% of his income, and that 10% of his income is enough to offset his stranger kicking. Now you might be tempted to criticize Jerry for kicking strangers, but hold on: 10 percentage points of his donations cancel out the stranger kicking. Would we criticize Jerry for only donating 10% of his income to charity? If not, it seems we cannot criticize Jerry. But wait a minute: later we learn that Jerry actually would have donated 30% of his income if he weren't stranger kicking. So we were wrong; his stranger kicking isn't canceled out by his donations, it actually makes his donations worse!
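To make the counterfactual arithmetic concrete, here is a minimal sketch using the hypothetical numbers from the Jerry example (the variable names and the bookkeeping are mine, purely illustrative, and they assume the kicking's harm is worth exactly its stipulated offset cost):

```python
# Hypothetical numbers from the Jerry example above, as shares of income.
DONATED_WHILE_KICKING = 0.20     # what Jerry actually donates
OFFSET_COST_OF_KICKING = 0.10    # stipulated donation needed to offset the kicking
DONATED_IF_NOT_KICKING = 0.30    # what he would have donated had he not kicked anyone

# Net good in each scenario, in income-share units,
# counting the kicking's harm as equal to its offset cost.
net_if_kicking = DONATED_WHILE_KICKING - OFFSET_COST_OF_KICKING
net_if_not_kicking = DONATED_IF_NOT_KICKING

print(f"net good while kicking:   {net_if_kicking:.0%}")      # 10%
print(f"net good without kicking: {net_if_not_kicking:.0%}")  # 30%
# Even though the 20% donation "covers" the 10% offset when viewed in isolation,
# the kicking scenario is 20 points of income-equivalent worse than the counterfactual.
```

The point of the sketch is just that judging the offset against the actual donation, rather than against the counterfactual donation, gets the sign of the comparison wrong.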
Since many EAs have ideal donating thresholds much higher than they will ever reach, we don't have a default standard to anchor their offsetting to; everything falls short by some significant amount. And since we discourage people from criticizing those who give a good deal but not enough, Jerry wouldn't get sanctioned much more for donating 10% rather than 30%; the ethics just aren't high enough resolution for that. The upshot is that Jerries can get away with doing almost arbitrary amounts of dickish things without necessarily doing anything to compensate for them that we could hold them accountable to. Moral hazard and slippery slope arguments can be suspicious, but this is one I am fairly confident is a real problem with offsetting, at least for EAs.
Right. The problem with offsetting is that rather than (1) doing something bad (e.g. kicking (medieval) peasants) and then (2) offsetting it somehow (e.g. by donating money), the better outcome is the one where you do (2), i.e. the offset, and just don't do the bad thing at all.
Someone might claim they won't do (2) unless they do (1), and that therefore the better outcome is that they do both (1) and (2) rather than neither (1) nor (2). But this is deeply suspicious and suggests a very contorted psychology. ("Funny thing is that if I don't kick the peasant, I just can't make myself donate, actually. Soooo, are you going to line him up for me or shall I do it?")
Amanda Askell, Tyler John, and Hayden Wilkinson have an excellent paper on offsetting but I don’t think it’s public. Here’s a link to some earlier work by Amanda that was all I could find after a quick google.
I'm excited to read it when it comes out! I've read Askell's post on it before, and I think it's mostly right, though I don't think it does enough to address the potential problems with offsetting even milder harms.