As a matter of pragmatic trade-offs and community health, I broadly agree with this. However, I do also think it’s important to point out that you[1] don’t have to throw out all your EA principles when making “emotional” donating decisions. If it’s necessary for your happiness to donate to cause area X, you can still try to make your donation to X as effective as possible, within your time constraints.
I suspect that the best way to do this is often to think about how narrow the cause area you’re drawn to actually is. Would you feel bad if you donated to anything other than exactly X, narrowly defined? This is an important question, since if X is the national cause du jour it’s likely to be getting a lot of attention and funding, and even small extensions in X beyond what’s in the news every day are likely to open up big opportunities to have more impact. The more you can comfortably extend the remit for your donation, the more impact you’re likely to have[2].
This has come up in both of the recent questions on the Forum about racial injustice, and not only in comments by me. If your goal is to tackle racism or discrimination broadly, there’s no particular reason to limit your concern to recent high-profile cases in the US. I’d predict that dollars going towards, say, helping largely-forgotten Rohingya refugees would be far more cost-effective than contributing even more money to a cause that’s currently all over the global news. Even better would be to find a group that’s been the victims of horrific attacks that no-one in the West has heard of.
Of course, none of this is to say you have to do that. We’re assuming ex hypothesi that this is “discretionary” donating that doesn’t count towards your GWWC pledge or whatever, and if the only way for you to not feel guilty is to donate to combating something very specific, like reducing police brutality against racial minorities in the USA, then you should (within this framing) do that. (Though even there there’s a lot of value in thinking about how to do that as effectively as possible, and I’m glad some people have been doing that.)
Overall, for this kind of discretionary/personal-wellbeing donating, I think an algorithm like the following would probably be a good idea:
1. Consider the cause area you feel like you need to contribute to. Think about a few ways you might extend it (e.g. in space, in time, in mechanism, in species). Would you feel okay with making those extensions? If so, do so, and repeat until your remit is as wide as you can make it without feeling you’re betraying the cause (or whatever other feelings are spurring these donations).
2. Within that remit, think/read/ask about how you could make your donation as effective as possible, within whatever time and emotional limits apply.
3. Make your donation in accordance with the findings from (2).
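The procedure above can be sketched in code. Everything here is a hypothetical illustration (the helper functions, cause labels, and impact numbers are all invented for the example), not a real tool or real cost-effectiveness data:

```python
# Toy sketch of the broaden-then-optimize procedure. All names and numbers
# below are hypothetical illustrations, not real charities or estimates.

def broaden_remit(remit, candidate_extensions, feels_ok):
    """Step 1: widen the remit with every extension the donor is comfortable
    with, repeating until no further extension is acceptable."""
    changed = True
    while changed:
        changed = False
        for ext in candidate_extensions:
            wider = remit | {ext}
            if ext not in remit and feels_ok(wider):
                remit = wider
                changed = True
    return remit

def best_opportunity(remit, opportunities):
    """Step 2: within the final remit, pick the highest-impact option found."""
    eligible = [o for o in opportunities if o["area"] in remit]
    return max(eligible, key=lambda o: o["impact_per_dollar"])

# Hypothetical donor: starts from a narrow cause, is comfortable extending
# in space (global discrimination) but not in species (animal welfare).
remit = broaden_remit(
    remit={"police brutality, US"},
    candidate_extensions={"discrimination, global", "animal welfare"},
    feels_ok=lambda r: "animal welfare" not in r,
)
donation_target = best_opportunity(
    remit,
    opportunities=[
        {"name": "US-focused charity", "area": "police brutality, US",
         "impact_per_dollar": 1.0},
        {"name": "refugee relief", "area": "discrimination, global",
         "impact_per_dollar": 5.0},
    ],
)
print(donation_target["name"])  # the wider remit surfaces the higher-impact option
```

The point of the sketch is just that step 1 terminates at the widest comfortable remit, and step 2 then optimizes within it; step 3 (actually donating) is left to the reader.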
[1] In all cases, I’m using “you” in the general sense, not specifically to address orthonormal.
[2] Trivially, the value of the highest-impact opportunity increases (weakly) monotonically as the breadth of the remit expands; at full generality, you’re just back to EA again, but the principle applies to partial extensions as well.
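To make the footnote’s claim explicit: if the remit grows from a set of opportunities $A$ to a superset $B$, the best available option can only get (weakly) better. In symbols (notation mine, for illustration, with $v(x)$ the impact of opportunity $x$):

```latex
A \subseteq B \;\implies\; \max_{x \in A} v(x) \,\le\, \max_{x \in B} v(x)
```

When $B$ is the set of all opportunities, this is just cause-neutral EA; the inequality is what the partial extensions rely on.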
I agree that EA thinking within a cause area is important, but the racist police brutality crisis in the USA is the particular motivating cause area I wrote this post about, and the Rohingya don’t enter into that.
Given the framing of discretionary donations, how broad you’re willing to go with your spending is entirely up to you. Broader means (sometimes much) more impact but less of...whatever hard-to-exactly-define thing it is that motivates people to donate to specific causes rather than for general impact. I imagine different people will set their thresholds for that trade-off in different places. My main point is that it would be good to explicitly consider how one might broaden the remit, not that there is necessarily a right or wrong place to put the boundary.
On the object level, there is a reading of your comment here that I do disagree with quite strongly, but it doesn’t seem terribly valuable to me to argue about it here.