I don’t think ethical offsetting is antithetical to EA. I think it’s orthogonal to EA.
We face questions in our lives of whether we should do things that harm others. Two examples are whether to take a long plane flight (which may take us somewhere we really want to go, but also releases a lot of carbon and causes global warming) and whether to eat meat (which might taste good but also contributes to animal suffering). EA and the principles of EA don’t give us a good guide on whether we should do these things or not. Yes, the EA ethos is to do good, but there’s also an understanding that none of us are perfect. A friend of a friend used to take cold showers, because the energy that would have heated her shower would have been made by a polluting coal plant. I think that’s taking ethical behavior in your personal life too far. But I also think that it’s possible to take ethical behavior in your personal life not far enough, and counterproductively shrug it off with “Well, I’m an EA, who cares?” But nobody knows exactly how far is too far vs. not far enough, and EA doesn’t help us figure that out.
Ethical offsetting is a way of helping figure this out. It can be either a metaphorical way, eg “I just realized that it would only take 0.01 cents to offset the damage from this shower, so forget about it”, or a literal way, eg “I am actually going to pay 0.01 cents to offset the costs of this shower.”
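(For concreteness, here is the kind of back-of-envelope arithmetic a figure like that comes from, sketched in Python. All three input numbers below are purely illustrative placeholders, not measured values; the real answer depends on your water heater, your grid's carbon intensity, and which offset price you trust, so it can land anywhere from a hundredth of a cent to a few cents.)

```python
# Back-of-envelope sketch of the "offset one hot shower" arithmetic.
# Every input below is a hypothetical placeholder, not a measured value.

SHOWER_ENERGY_KWH = 1.5            # hypothetical: energy to heat one long shower
GRID_CO2_KG_PER_KWH = 0.5          # hypothetical: fairly coal-heavy electricity
OFFSET_PRICE_USD_PER_TONNE = 10.0  # hypothetical: price of one tonne of CO2 offsets

co2_kg = SHOWER_ENERGY_KWH * GRID_CO2_KG_PER_KWH
offset_cost_usd = (co2_kg / 1000.0) * OFFSET_PRICE_USD_PER_TONNE

print(f"{co2_kg:.2f} kg CO2, offset for about {offset_cost_usd * 100:.2f} cents")
# With these placeholders: 0.75 kg CO2, offset for about 0.75 cents
```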
As such, I think all of your objections to offsetting fall short:
The reference class doesn’t particularly matter. The point is that you worried you were doing vast harm to the world by taking a hot shower, but in fact you’re only doing 0.01 cents of harm to the world. You can pay that back to whoever it most soothes your conscience to pay it back to.
Nobody is a perfectly effective altruist who donates 100% of their money to charity. If you choose to donate 10% of your money to charity, that remaining 90% is yours to do whatever you want with. If what you want is to offset your actions, you have just as much right to do that as you have to spend it on booze and hookers.
Ethical offsetting isn’t an “anti-EA meme” any more than “be vegetarian” or “tip the waiter” are “anti-EA memes”. All of these involve having some sort of moral code other than buying bednets, but EA isn’t about limiting your morality to buying bednets; it’s about that being a bare minimum. Once you’ve done that, you can consider what other moral interests you might have.
People who become vegetarian feel that, alongside their charitable donations, they are morally pushed to be vegetarian. That’s okay. People who want to offset meat-eating feel that, alongside their charitable donations, they are morally pushed to offset not being vegetarian. That’s also okay. As long as they’re not taking it out of the money they’ve pledged to effective charity, it’s not EA’s business whether they want to do that or not, just as it’s not EA’s business whether they become vegetarian or tip the waiter or behave respectfully to their parents or refuse to take hot showers. Other forms of morality aren’t in competition with EA and don’t subvert EA. If anything, they contribute to the general desire to build a more moral world.
[written when very tired]
They can be in competition with EA, or subvert it. I think most do, if you follow them to their conclusions. Philanthrolocalism is a straightforward example of a philanthropic practice that seems to be in direct conflict with EA. But more broadly, many ethical frameworks, like moral absolutism, come into conflict with EA ideas pretty fast. You can say most EAs don’t only do EA things, and I’d agree with you. And you can say people shouldn’t let EA ideas determine all their behaviors, and I’d also agree with you.
And additionally, for most ideologies, most people fall short much of the time. Christians sin, feminists accidentally support the patriarchy, etc. That doesn’t mean sinning isn’t antithetical to being a good Christian, or that supporting the patriarchy isn’t antithetical to being a good feminist. You can expect people to fall short, and accept them, and not blame them, and celebrate their efforts anyway, without pretending those things were good or right.
Since when is EA about buying bednets being the bare minimum? That seems like an unusual definition of EA. Many EAs think obligation framings around giving are wrong or not useful. EA is about doing as much good as possible. EAs try to figure out how to do that, and fall short, and that’s to be expected, and great that they try! But an activity one knows doesn’t do the most good (directly or indirectly) should not be called EA.
From all this, you could continue to press your argument that they’re merely orthogonal. I might have agreed, until I started seeing EAs trying to convince other EAs to do ethical offsetting in EA fora and group discussions. At that point, it’s being billed (I think) as an EA activity and taking up EA-allocated resources with specifically non-EA principles (in particular, I think practices that drive (probably already conscientious!) individuals to focus on the harm they commit, rather than on seeking out the greatest sources of suffering, have been one of the most counterproductive habits of general do-goodery in recent history).
Without EA already existing, ethical offsetting may have been a step in the right direction (I think it’s probably 35% likely that spreading the practice was net positive). With EA, and amongst EAs, I think it’s a big step back.
That said, I agree with you that “ethical offsetting is a way of helping figure this out”, whether in the metaphorical sense or the literal one.
I think “do as much good as possible” is not the best framing, since it means (for example) that an EA who eats at a restaurant is a bad EA, since they could have eaten ramen instead and donated the difference to charity. I think it’s counterproductive to define this in terms of “well, I guess they failed at EA, but everyone fails at things, so that’s fine”; a philosophy that says every human being is a failure and you should feel like a failure every time you fail to be superhuman doesn’t seem very friendly (see also my response to Squark above).
My interpretation of EA is “devote a substantial fraction of your resources to doing good, and try to use them as effectively as possible”. This interpretation is agnostic about what you do with the rest of your resources.
Consider the decision to become vegetarian. I don’t think anybody would think of this as “anti-EA”. However, it’s not very efficient—if the calculations I’ve seen around are correct, then despite being a major life choice that seriously limits your food options, it’s worth no more than a $5–50 donation to an animal charity. This isn’t “the most effective thing” by any stretch of the imagination, so are EAs still allowed to do it? My argument would be yes—it’s part of their personal morality that’s not necessarily subsumed by EA, and it’s not hurting EA, so why not?
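(A calculation behind a figure like that $5–50 range might look roughly like the sketch below. Both inputs are hypothetical placeholders rather than endorsed estimates; published figures for animals affected per year of vegetarianism and for animals helped per dollar donated vary enormously, which is why the quoted range is so wide.)

```python
# Rough sketch of how a "one year of vegetarianism is worth a $5-50 donation"
# style estimate gets assembled. Both inputs are hypothetical placeholders.

ANIMALS_SPARED_PER_VEGETARIAN_YEAR = 30.0  # hypothetical diet-impact estimate
ANIMALS_HELPED_PER_DOLLAR_DONATED = 3.0    # hypothetical charity-effectiveness estimate

equivalent_donation_usd = (
    ANIMALS_SPARED_PER_VEGETARIAN_YEAR / ANIMALS_HELPED_PER_DOLLAR_DONATED
)

print(f"One year of vegetarianism ~ ${equivalent_donation_usd:.0f} donated")
# With these placeholders: ~$10, i.e. inside the $5-50 range quoted above
```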
I feel the same way about offsetting nonvegetarianism. It may not be the most effective thing any more than vegetarianism itself is, but it’s part of some people’s personal morality, and it’s not hurting EA. Suppose people in fact spend $5 offsetting nonvegetarianism. If that $5 wasn’t going to EA charity, it’s not hurting EA for the person to give it to offsets instead of, I don’t know, a new bike. If you criticize people for giving $5 in offsets, but not for any other non-charitable use of their money, then that’s the fallacy in this comic: https://xkcd.com/871/
Let me put this another way. Suppose that somebody who feels bad about animal suffering is currently offsetting their meat intake, using money that they would not otherwise give to charity. What would you recommend to that person?
Recommending “stop offsetting and become vegetarian” results in a very significant decrease in their quality of life for the sake of gaining them an extra $5, which they spend on ice cream. Assuming they value not-being-vegetarian more than they value ice cream, this seems strictly worse.
Recommending “stop offsetting but don’t become vegetarian” results in them donating $5 less to animal charities, buying an ice cream instead, and feeling a bit guilty. They feel worse (they prefer not feeling guilty to getting an ice cream), and animals suffer more. Again, this seems strictly worse.
The only thing that doesn’t seem strictly worse is “stop offsetting and donate the $5 to a charity more effective than the animal charity you’re giving it to now”. But why should we be more concerned about redirecting the money they’re already using semi-efficiently to a more effective charity, as opposed to starting with the money they’re spending on clothes or games or something, and leaving the money they’re already spending pretty efficiently as the last thing we worry about redirecting?
Aren’t you kind of not disagreeing at all here?
The way I understand it, Scott claims that using your non-EA money for ethical offsetting is orthogonal to EA because you wouldn’t have used that money for EA anyway, and Claire claims that EAs suggesting ethical offsetting to people as an EA-thing to do is antithetical to EA because it’s not the most effective thing to do (with your EA money).
The two claims don’t seem incompatible with each other, unless I’m missing something.
Your reply seems to be based on the premise that EA is some sort of deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder who stands a chance).
The deviation from imaginary “perfect altruism” is due either to having values other than improving the world or to the practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent that outweighs the gain. Again, moral offsets don’t reflect the relevant considerations.
I gave the example of giving 10% to bednets because that’s an especially clear example of a division between charitable and non-charitable money—eg I have pledged to give 10% to charity, but the other 90% of my money goes to expenses and luxuries, and there’s no cost to EA in giving that to offsets instead. I know many other EAs work this way too.
If you believe this isn’t enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there’s such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you “should have” given to charity.
But if you’re not working off a model where you have to agonize over everything, I’m not sure why you should agonize over offsets.
I don’t think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don’t reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.
No. Have you tried it? I have. It works fine for me.
Maybe some people are too addicted to modern comforts or maybe they can’t handle the stress and pity they feel when thinking about charity. Sucks for them, but it’s a pragmatic issue which doesn’t directly change the moral issue.
(Two years later...) I have tried it. It’s a disaster for me. Every time I buy food, I think, “Someone else needs this food more than me,” which is an accurate statement but takes me to a dark place.
This seems hard; sorry to hear about it :-/
For what it’s worth, I’ve found self-laceration like this to be both really bad for my mental health and really bad for my personal efficacy.
Rather. That’s why I’ve donated a set percentage for about a decade now. “Set and forget” direct debits are both easier and more effective than constantly questioning which expenses are strictly necessary and which are luxuries. Budgeting how much goes to charity and how much goes to my expenses also makes it easier to get along with friends and family. “Sorry, that’s not in the budget” is easier than “Sorry, visiting you is less important than deworming strangers’ children.”
It seems straightforward to realize that you need food so that you can go about your business of making the world better. A soldier in WWII did not feel some kind of moral pain at the fact that he was getting more meat in his rations than the civilians back home. To agonize or “self-lacerate” about this common-sense logic is an abnormal pathology which is specific to certain types of people who join EA. So I understand that it doesn’t work for you, but I think that’s not representative of how most people will think, and it’s worth making a real effort to learn to get along with the rational line of thought.
I think characterizing thought-patterns as “abnormal” isn’t helpful for the person you’re addressing, and isn’t good for our community’s discourse.
Especially when the thought-pattern in question is fairly common around these parts.
Also, “how most people think” isn’t a good benchmark for “how we should think.”
Well, it is not normal. That’s what abnormal means. I think that the most helpful thing is to tell the truth. I have abnormal thought patterns too; it doesn’t perturb me to recognize it.
No, that is exactly when it is most important to say “hey, this is not a foregone conclusion, you are in a bit of an echo chamber”.
Sure, what is rational is a good benchmark for how we should think, and it’s rational to eschew hard rules about what percentage of your money is luxurious versus what percentage is charitable.
I am using “how most people think” as a good benchmark for how we can think, and what I am pointing out here is that it is possible to adopt the rational way of thinking without going crazy and self-flagellating.
This reads a bit like “hey, I have the same thing you’re having, but it’s not a problem for me. Maybe if you just snapped out of it, it wouldn’t be a problem for you either!”
I think this sort of framing lacks compassion & can exacerbate things.
I don’t follow this; could you expand on it a little?
But I didn’t say “Maybe if you just snapped out of it, it wouldn’t be a problem for you either,” I said it was abnormal.
If you have a better way of framing the same facts, feel free to present it.
Well, there isn’t any basis for it, and it contradicts consequentialism, and it contradicts deontology; really, I can’t think of any framework that says you should make a budget such that a percentage of your money is a carte blanche gift to yourself, independent of the considerations of benevolence and distributive justice. In all sensible moral theories, the needs of others count as a pro tanto reason to donate any amount of your money.
I think a relevant test here is “Is this better than saying nothing at all?”
It conveys the truth, which is a good reason to presume that it is.
“First, is it true? Second, is it kind? Third, is it necessary?”
Yes, yes, and yes. In Scott’s post he defines unkindness as anger or sarcasm—not the use of words like “abnormal” that just rub us the wrong way.
But, like… What you said made me feel bad and was also unhelpful. I gained nothing from it, and lost a good mood. So why say it?
If you had suggested a useful resource or alternative, I would have thought your comment had merit.
Alternatively, you could have shown compassion by reflecting back what you heard—saying something like, “It sounds like making trade-offs on a daily basis is very emotional for you, so you donate a set percentage to cope. That might be the best solution for you right now. However, that doesn’t mean it’s the best solution for everyone.”
+1 to Khorton.
This could be a good opportunity for kbog to reflect and maybe update.
But I predict that they’ll instead double down on their position...
Obviously we don’t always make comments that help the other person; your comment, for instance, did not help me at all, because I am 100% content with abolishing the charitable/non-charitable distinction in my budget, and need no help from anyone with figuring it out. Yet you made your comment nonetheless, presumably for the benefit of others, so that they might know your experience, or for my benefit, so that I might know more about your experience. Likewise, I made my comment for the benefit of anyone else who is reading, to persuade them that your experience is atypical, and to persuade you that your experience is atypical.
I didn’t aim to make you feel bad.
But I don’t feel compassion for people just because they have arrived at some kind of existential angst, I feel compassion for people when they have a more severe problem, so if I expressed sorrow here then I’d be dishonest.
I quite clearly said “I understand that it doesn’t work for you.” All you are doing is pleading for more cushions around my words. Such effort would be better spent thinking about whether my statements are correct or not, or just moving on with your life.
Likewise, effort on my part is better spent on other things besides adding such cushions. You clearly said yourself that such decisions are very emotional for you, so it’s obvious to every reader that they are very emotional for you, and if you have a basic level of respect for my reading comprehension abilities then you will presume that I understood your statement that such decisions are very emotional for you, and obviously I did nothing to disagree with that fact—it is, after all, not the sort of thing that can be reasonably disagreed with from a distance. So to merely repeat this obvious fact, which is understood by everyone to be understood by everyone, would be a waste of time.
But I don’t merely believe it’s not the best solution for everyone, I believe it’s the wrong solution for most people, so this would be an inaccurate representation of my position.
An important point. Failing to take this into account comes across as morally narrow.