Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.
I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they’re not unique to EA. (Although the ‘intellectual puzzle’ piece may be somewhat distinctive to EA.)
We shouldn’t beat ourselves up about these motivations, IMHO. There’s no shame in them. We’re hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that’s a huge win.
We don’t need pristine motivations. Don’t buy into the Kantian nonsense that only disinterested or purely ‘altruistic’ reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It’s not an evolutionarily stable strategy, in game theory terms.
We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.
The only trouble comes if we try to pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA. And if misguided puritanism about what motives we can expect EAs to have leads us to strip away the payoffs for these incentives, we weaken EA just the same.
“If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.”
This seems plausible. On the other hand, some nuance may be important here. In the realms of anthropogenic x-risks and meta-EA, it is often very hard to judge whether a given intervention is net-positive or net-negative, and conflicts of interest can make people less likely to make good decisions from an EA perspective.
“If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.”
What are the costs and benefits of reversing this shame? By “reversing shame” I mean explicitly pitching EA to people as an opportunity to pursue their non-utilitarian desires.