I think in practice, EA is now an answer to the question of how to do the most good, and the answer is "randomista development, animal welfare, extreme pandemic mitigation and AI alignment". This has a bunch of empirical claims baked into it.
I see EA as the question of how to do the most good; we come up with answers, but they could change. It's the question that's fundamental.
But in practice, I don’t think we come up with answers anymore.
Some people came up with a set of answers; enough of us agree with this set, and they've been the same answers for long enough that they're an important part of EA identities, even if they're less important than the question of how to do the most good.
So I think the relevant empirical claims are baked into identifying as an EA.
This is sort of getting into the thick EA vs thin EA idea that Ben Todd discussed once, but practically I think almost everyone who identifies as an EA mostly agrees with these areas being amongst the top priorities. If you disagreed too strongly, you would probably not feel like part of the EA movement.
and the answer is “randomista development, animal welfare, extreme pandemic mitigation and AI alignment”
Some people came up with a set of answers; enough of us agree with this set, and they've been the same answers for long enough that they're an important part of EA identities
I think some EAs would consider work on other areas, like space governance and improving institutional decision-making, highly impactful. And some might say that randomista development and animal welfare are less impactful than work on x-risks, even though the community has focussed on them for a long time.
I call myself an EA. Others call me an EA. I don’t believe all of these “answer[s] to the question of how to do the most good.” In fact, I know several EAs, and I don’t think any of them believe all of these answers.
I really think EA is fundamentally cause-neutral, and that an EA could still be an EA even if all of their particular beliefs about how to do good changed.
Hmm. I also identify as an EA and disagree to some extent with EA answers on cause prioritisation, but my disagreement is mostly about the extent to which they're priorities compared to other things, and it isn't too strong.
But it seems very unlikely that someone would continue to identify as an EA if they strongly disagreed with all of these answers, which is why I think, in practice, these answers are part of the EA identity now (although I think we should try to change this, if possible).
Do you know an individual who identifies as an EA and strongly disagrees with all of these areas being priorities?
Not right now. (But if I met someone who disagreed with each of these causes, I wouldn’t think that they couldn’t be an EA.)