I’d like to note that it is totally possible for someone to sincerely talk about “cause-first EA” while simultaneously believing that longtermism and AI safety are the causes EA should prioritize.
As a community organizer, I’ve lost track of how many times people I’ve introduced to EA get excited at first, but then become disappointed that all we seem to talk about are effective charities and animals instead of… mental health or political action or climate change or World War 3 or <insert favourite cause here>.
And when this happens I try to take a member-first approach and make sure they understand what led to these priorities, so that the new member is equipped to change their own mind, argue back at us, or apply EA principles in their own work wherever it makes sense to do so.
A member-first approach wouldn’t guarantee a diversity of causes. We could in theory have a very member-first movement that only prioritizes AI alignment. That is totally possible. The difference is that a member-first, AI-alignment-focused movement would focus on ensuring its members properly understand cause-agnostic EA principles (something they can derive value from regardless of their ability to contribute to AI alignment) and, based on that, understand why AI alignment just happens to be the thing the community mostly talks about at this point in time.
Our current cause-first approach is less concerned with teaching cause-agnostic EA principles and more concerned with getting skilled people of any kind, whether they care about EA principles or not, to work on AI alignment or other important things. Teaching EA principles is mostly instrumental to that end goal.
I believe this is closer to the real cause of the tension you describe in the “cause-first” model. It has less to do with the focus on a single cause and more to do with the fact that humans are tribalistic.
If you’re not going to put effort into making sure someone new feels like part of the tribe (in this case, giving them the cause-agnostic EA-principles groundwork they can take home and feel good about), then they’re not going to feel like part of your cause-first movement when they don’t feel they can contribute to said cause.
I think if we were more member-first, we would see far more people who have nothing to offer AI safety research nonetheless feel like “EA is my tribe.” Ergo, less tension.