Sure, though I still think it is misleading to say that the survey respondents think “EA should focus entirely on longtermism”.
Seems more accurate to say something like “everyone agrees EA should focus on a range of issues, though people put different weight on different reasons for supporting them, including long & near term effects, indirect effects, coordination, treatment of moral uncertainty, and different epistemologies.”
To be clear, my primary reason why EA shouldn’t focus entirely on longtermism is that doing so would to some degree violate some implicit promises the EA community has made to the external world. If that weren’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.
To some degree my response to this situation is “let’s create a separate longtermist community, so that I can indeed invest in that in a way that doesn’t get diluted with all the other things that seem relatively unimportant to me”. If we had a large and thriving longtermist community, it would definitely seem bad to me to suddenly start investing in all of these other things that EA does that don’t really seem to check out (to me) from a utilitarian perspective, and I would be sad to see almost any marginal resources moved toward the other causes.
I’m strongly opposed to this, and think we need to be clear: EA is a movement of people with different but compatible values, dedicated to understanding how to do the most good. It’s fine for you to discuss why you think longtermism is valuable, but it’s not as though anyone gets to tell the community what values the community should have.
The idea that there is a single “good” which we can objectively find and then maximize is a bit confusing to me, given that we know values differ. (And this has implications for AI alignment, obviously.) Instead, EA is a collaborative endeavor of people with compatible interests. If strong longtermists’ interests really are incompatible with most of EA, as yours seem to be, that’s a huge problem, especially because many of the people who seem to embrace this viewpoint are in leadership positions. I didn’t think there was such a split, but perhaps I am wrong.
I think we don’t disagree?
I agree: EA is a movement of people with different but compatible values, and given its existence, I don’t want to force anything on it, or force anyone to change their values. It’s a great collaboration of a number of people with different perspectives, and I am glad it exists. Indeed, the interests of different people in the community are pretty compatible, as evidenced by the many meta interventions that seem to help many causes at the same time.
I don’t think my interests are incompatible with most of EA, and am not sure why you think that? I’ve clearly invested a huge amount of my resources into making the broader EA community better in a wide variety of domains, and generally care a lot about seeing EA broadly get more successful and grow and attract resources, etc.
But I think it’s important to be clear which of these benefits are gains from trade, vs. things I “intrinsically care about” (speaking a bit imprecisely here). If I could somehow get all of these resources and benefits without having to trade things away, and instead just build something of similar scale and success that was more directly aligned with my values, that seems better to me. I think historically this wasn’t really possible, but with longtermist work finding more traction, I am now more optimistic about it. But also, I still expect EA to provide value for the broad range of perspectives under its tent, and expect that investing in it in some capacity or another will continue to be valuable.
Sorry, this was unclear. I’m not sure that we disagree, and I apologize if it seemed like I was implying that you haven’t done a tremendous amount for the community, or that you don’t hope for its success, etc. I do worry that there is a perspective (which you seem to agree with) that if we magically removed all the various epistemic issues with knowing the long-term impacts of decisions, longtermists would no longer be aligned with others in the EA community.
I also think that longtermism is plausibly far better as a philosophical position than as a community, as mentioned in a different comment, but that point is even farther afield, and needs a different post and a far more in-depth discussion.
Agree it’s more accurate. How I see it:
> Longtermists overwhelmingly place some moral weight on non-longtermist views and support the EA community carrying out some non-longtermist projects. Most of them, but not all, diversify their own time and other resources across longtermist and non-longtermist projects. Some would prefer to partake in a new movement that focused purely on longtermism, rather than EA.
Worth noting the ongoing discussions about how longtermism is better thought of / presented as a philosophical position rather than a social movement.
The argument is something like: just like effective altruists can be negative utilitarians or deontologists or average utilitarians, and just like they can have differing positions about the value of animals, the environment, and wild animal suffering, they can have different views about longtermism. And just like policymakers take different viewpoints into account without needing to commit to anything, longtermism as a position can exist without being a movement you need to join.