What do you mean by “maximization”? I think it’s important to distinguish between:
(1) Hegemonic maximization: the (humanly infeasible) idea that every decision in your life should aim to do the most impartial good possible.
(2) Maximizing within specific decision contexts: insofar as you’re trying to allocate your charity budget (or altruistic efforts more generally), you should try to get the most bang for your buck.
As I understand it, EA aims to be maximizing in the second sense only. (Hence the norm around donating 10%, not some incredibly demanding standard.)
On the broader themes, a lot of what you’re pointing to is potential conflicts between ethics and self-interest, and I think it’s pretty messed up to use the language of psychological “health” to justify a wanton disregard for ethics. Maybe it’s partly a cultural clash, and when you say things like “All perspectives are valid,” you really mean them in a non-literal sense?
The norm around donating 10% is one of the places where EA has constructed a sort of “safe harbour,” sending a message at least somewhat like: as long as you give 10% (or, in certain circumstances, less), you should feel good about yourself as an EA, feel supported, etc. In other words, the community ethos implicitly discourages feeling guilty about “only” donating 10 percent.
I’m not as convinced that we have established and effectively communicated that kind of safe harbour around certain other personal decisions, like career decisions. So I don’t know that the soft 10 percent norm is representative of EA’s norms and pressures around demandingness more generally.
To be fair, it’s easier to construct a safe harbour around money than around something like career decisions because we don’t have ten careers to allocate.
On the types of maximization: I think different pockets of EA are in different places on this. I think it’s not unusual, at least historically, for subcultures to have some degree of lionization of (1). And there’s a natural internal logic to this: if doing some good well is good, surely doing more is better?
I mean, it’s undeniable that the best thing is best. It’s not like there’s some (coherent) alternative view that denies this. So I take it the real question is how much pressure one should feel towards doing the impartial best (at cost of significant self-sacrifice); whether the maximum should be viewed as the baseline for minimal acceptability, and anything short of it constitutes failure, or whether we rather aim to normalize something more modest and simply celebrate further good beyond that point as an extra bonus.
I can see pathologies in both directions here. I don’t think it makes sense to treat perfection as the baseline, such that any realistic outcome automatically qualifies as failure. For anyone to think that way would seem quite confused. (Which is not to deny that it can happen.) But it would also seem a bit pathological to refuse to celebrate moral saints? Like, obviously there is something very impressive about moral heroism and extreme altruism that goes beyond what I personally would be willing to sacrifice for others? I think the crucial thing is just to frame it positively rather than negatively, and not to get confused about where the baseline or zero-point properly lies.
I largely agree with this, but I feel like your tone is too dismissive of the issue here? Like: the problem is that the maximizing mindset (encouraged by EA), applied to the question of how much to apply the maximizing mindset, says to go all in. This isn’t getting communicated explicitly in EA materials, but I think it’s an implicit message which many people receive. And although I think that it’s unhealthy to think that way, I don’t think people are dumb for receiving this message; I think it’s a pretty natural principled answer to reach, and the alternative answers feel unprincipled.
Given this, my worry is that expressing things like “EA aims to be maximizing in the second sense only” may be kind of gaslight-y to some people’s experience (although I agree that other people will think it’s a fair summary of the message they personally understood).
On the potential conflicts between ethics and self-interest: I agree that it’s important to be nuanced in how this is discussed.
But:
I think there’s a bunch of stuff here which isn’t just about those conflicts, and that there is likely potential for improvements which are good on both prudential and impartial grounds.
Navigating real tensions is tricky, because we want to be cooperative in how we sell the ideas. cf. https://forum.effectivealtruism.org/posts/C665bLMZcMJy922fk/what-is-valuable-about-effective-altruism-implications-for