> you don’t want to take a moral position where it’s ok to harm some people in order to help others “more effectively”.
This is not a full defense of my normative ethics, but I think it’s reasonable to pull the lever in the classical trolley problem, and I’ll note that this seems to be the most common position among EAs, philosophers, and laypeople.
In addition, the harm from increasing CO2 emissions is fairly abstract, and to my mind should not trigger many of the same non-consequentialist moral intuitions as, e.g., agent-relative harms like lying, breaking a promise, or ignoring duties to a loved one.
> Second, some cause areas lots of people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you’re trying to help. Which is not necessarily a reason to drop them, but is in my opinion certainly a reason not to treat them as the single cause you want to put all your eggs into. [emphasis mine]
I don’t personally agree with this line of reasoning. There are a bunch of nuances here*, but at heart my view is that usually either you believe the cognitive bias arguments are strong enough to drop your top cause area(s), or you don’t. So I do think we should be somewhat wary of arguments that lead to us having more resources/influence/comfort (though not infinitely so). However, the most productive use of this wariness is to subject arguments or analyses that oh-so-coincidentally benefit us to stronger scrutiny, rather than to hedge at less important margins.
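To make the either/or point concrete, here’s a minimal sketch in Python. All numbers and names are made up, and it assumes risk-neutral expected value with roughly linear marginal returns at individual-donor scale; under those assumptions, whatever bias discount you apply to your top cause’s estimate either flips your ranking or it doesn’t, and a partial split is never optimal.

```python
# Minimal sketch of the "either you drop it or you don't" point, with
# made-up numbers. Under risk-neutral expected-value reasoning and
# roughly linear marginal returns at individual-donor scale, the best
# allocation is a corner solution: apply whatever bias discount you
# believe in, then give everything to the highest discounted option.

# Hypothetical (marginal value per dollar, bias discount) for two causes.
# The discount models "this estimate oh-so-coincidentally benefits me,
# so I trust it less".
causes = {
    "top_cause":   {"value_per_dollar": 10.0, "bias_discount": 0.5},
    "other_cause": {"value_per_dollar": 4.0,  "bias_discount": 1.0},
}

def discounted_value(cause):
    """Marginal value per dollar after the suspicion-of-bias discount."""
    return cause["value_per_dollar"] * cause["bias_discount"]

budget = 1000.0

# With linear marginal returns, expected value is maximized by putting
# the whole budget on the single best discounted option.
best = max(causes, key=lambda name: discounted_value(causes[name]))
allocation = {name: (budget if name == best else 0.0) for name in causes}

for name, amount in allocation.items():
    print(f"{name}: ${amount:.0f}")
# Here even a 50% discount leaves top_cause ahead (5.0 > 4.0 per dollar),
# so the discount either changes the ranking or it doesn't; a 70/30
# "hedge" is never the answer under these assumptions.
```

(Diminishing returns or moral uncertainty can break the corner solution; that’s part of what the donation splitting discussion below is about.)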
Donation splitting is possibly a relevant prior discussion here.
*For example, there might be unusually tractable actions individuals can take in non-top cause areas that have amazing marginal utility (e.g., voting as a US citizen in a swing state).