Ways of framing EA that (extremely anecdotally*) make it seem less icky to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation.
EA as a bet on a general way of approaching how to do good, one that is almost certainly wrong in at least some ways, rather than a claim that we've "figured out" how to do the most good. (Probably no one claims the latter, but newcomers sometimes get this vibe.) Different people in the community have different degrees of belief in the bet, and, like all bets, it can make sense to take it even if you still have a lot of uncertainty.
EA as about doing good on the current margin. That is, we're not trying to work out the optimal allocation of altruistic resources in general, but rather asking: given how the rest of the world is spending its money and time to do good, which approaches could do with more attention? Corollary: you should expect to see EA behaviour changing over time (for this and other reasons). This is a feature, not a bug.
EA as diverse in its ways of approaching how to do good. Some people work on global health and wellbeing. Others on animal welfare. Others on risks from climate change and advanced technology.
These frames can also apply to any specific cause area.
*like, I remember talking to a few people who became more sympathetic when I used these frames.
I like this thinking in some ways, but I think there are also some risks. For instance, emphasising how diverse EA is in its ways of doing good could lead people to expect it to be more diverse than it actually is, which could result in disappointment. It might also be good to be upfront about some of the less intuitive aspects of EA.
Agreed, thanks for the pushback!