(1) Calling yourself “longtermist” bakes empirical or refutable claims into an identity, making it harder to course-correct if you later find out you’re wrong.
Isn’t this also true of “Effective Altruist”? From my epistemic vantage point, “longtermist” bakes in many fewer assumptions than “Effective Altruist”. I feel like there are just a lot of convergent reasons to care about the future, and that case seems more robust to me than the case for “you have to try to do the most good” and for a lot of the other hidden assumptions in EA.
I think a position of “yeah, agree, I also think people shouldn’t call themselves EAs, rationalists, etc.” is pretty reasonable and quite defensible, but I’m a bit confused about what your actual stance here is, given the things you write in this post.
What empirical claims are baked into EA?
I think in practice, EA is now an answer to the question of how to do the most good, and the answer is “randomista development, animal welfare, extreme pandemic mitigation and AI alignment”. This has a bunch of empirical claims baked into it.
I see EA as the question of how to do the most good; we come up with answers, but they could change. It’s the question that’s fundamental.
But in practice, I don’t think we come up with answers anymore.
Some people came up with a set of answers; enough of us agree with that set, and it has stayed the same for long enough, that these answers are now an important part of EA identities, even if they’re less important than the question of how to do the most good.
So I think the relevant empirical claims are baked into identifying as an EA.
This is sort of getting into the thick EA vs. thin EA idea that Ben Todd discussed once, but in practice I think almost everyone who identifies as an EA mostly agrees that these areas are amongst the top priorities. If you disagreed too strongly, you would probably not feel like part of the EA movement.
I think some EAs would consider work on other areas like space governance and improving institutional decision-making highly impactful. And some might say that randomista development and animal welfare are less impactful than work on x-risks, even though the community has focussed on them for a long time.
I call myself an EA. Others call me an EA. I don’t believe all of these “answer[s] to the question of how to do the most good.” In fact, I know several EAs, and I don’t think any of them believe all of these answers.
I really think EA is fundamentally cause-neutral, and that an EA could still be an EA even if all of their particular beliefs about how to do good changed.
Hmm.
I also identify as an EA and disagree to some extent with EA answers on cause prioritisation, but my disagreement is mostly about how high a priority these areas are compared to other things, and it isn’t too strong.
But it seems very unlikely that someone would continue to identify as an EA if they strongly disagreed with all of these answers, which is why I think these answers are, in practice, part of the EA identity now (although I think we should try to change this, if possible).
Do you know an individual who identifies as an EA and strongly disagrees with all of these areas being priorities?
Not right now. (But if I met someone who disagreed with each of these causes, I wouldn’t think that they couldn’t be an EA.)
Fair point, thanks!
I think it’s probably not great to have “effective altruist” as an identity either (I largely agree with Jonas’s post and the others I linked), although I disagree with the case you’re making for this.
I think that my case against EA-as-identity would be more on the (2) side, to use the framing of your post. Yours seems to be from (1), and based (partly) on the claim that “EA” requires the assumption that “you have to try to do the most good” (which I think is false). (I also think you’re pointing to the least falsifiable of the assumptions/cruxes I listed for longtermism.)
In practice, I probably slip more with “effective altruist” than I do with “longtermist,” and call people “EAs” more (including myself, in my head). This post is largely me thinking through what I should do—rather than explaining to readers how they should emulate me.
One thing I’m curious about—how do you effectively communicate the concept of EA without identifying as an effective altruist?