No, it could come from having a high-impact job (where nonzero marginal hours go into it) or from donating a fraction of the difference rather than all of the difference.
I also think that if you believe that donations to other charities have higher marginal impact than donations to climate charities, it'd be less moral to donate to climate charities instead.
> No, it could come from having a high-impact job (where nonzero marginal hours go into it) or from donating a fraction of the difference rather than all of the difference.
True; this still means you’re doing something with the “profit” from that extra time and not just letting the information sit in your head. You’re putting it into an impactful job (and not playing videogames) or you’re using the money to mitigate the damage.
> I also think that if you believe that donations to other charities have higher marginal impact than donations to climate charities, it'd be less moral to donate to climate charities instead.
I think there are at least two points against believing this.
First, you’re directly harming the world in a specific way by flying instead of taking the train, and you don’t want to take a moral position where it’s ok to harm some people in order to help others “more effectively”.
Second, some cause areas lots of people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you’re trying to help. Which is not necessarily a reason to drop them, but is in my opinion certainly a reason not to treat them as the single cause you want to put all your eggs into. It’s easier just to see them as the most moral, no matter the circumstances, but I think that’s dangerous.
> you don’t want to take a moral position where it’s ok to harm some people in order to help others “more effectively”.
This is not a full defense of my normative ethics, but I think it’s reasonable to “pull” in the classical trolley problem, and I want to note that I think this is the most common position among EAs, philosophers, and laymen.
In addition, the harm from increasing CO2 emissions is fairly abstract, and to me it should not evoke many of the same non-consequentialist moral intuitions as e.g. agent-relative harms like lying, breaking a promise, ignoring duties to a loved one, etc.
> Second, some cause areas lots of people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you’re trying to help. Which is not necessarily a reason to drop them, but is in my opinion certainly a reason not to treat them as the single cause you want to put all your eggs into. [emphasis mine]
I don’t personally agree with this line of reasoning. There are a bunch of nuances here*, but at heart my view is that usually either you believe the cognitive-bias arguments are strong enough to drop your top cause area(s), or you don’t. So I do think we should be somewhat wary of arguments that lead to us having more resources/influence/comfort (but not infinitely so). However, the most productive use of this wariness is to apply stronger scrutiny to arguments or analyses that oh-so-coincidentally benefit ourselves overall, rather than to hedge at less important levels.
*for example, there might be unusually tractable actions individuals can do for non-top cause areas that have amazing marginal utility (e.g. voting as a US citizen in a swing state)
Just to make this explicit: that would imply donating that value in addition to those 100 USD.
Donation splitting is possibly a relevant prior discussion here.