A “cause-first” movement runs similar risks by vesting too much authority in a small elite. It is not unlike a cult: the members come together, support each other, believe in a common goal, and make major strides towards it, but ultimately burn out, as cults often do, because the cause treats its members too instrumentally, as objects for the good of the cause. Fast and furious, but without the staying power of a religion.
That said, I’m also partial to the cause-first approach. But what we have learnt since, like Oli Habryka’s podcast here, made me update strongly towards a member-first mindset, which I think would have pushed back far more firmly against such revelations as antithetical to caring for one’s members. Less deference and more thinking for yourself, as Oli did, seems like a better strategy for any community’s long-term flourishing. EA’s recent wins don’t counteract this intuition of mine strongly enough when I think decades or even generations into the future.
Then again, if AI timelines really are short, maybe we just need a fast-and-furious approach for now.
I’ve noticed that most of the tension in the “cause-first” model comes from the “cause” being singular rather than plural (i.e. people who join EA because of GHWB and Animal Welfare, but then discover that at EAG everyone is only talking about AI). Marcus claims that EA’s success is based on being cause-first, and offers examples:
“The EA community was at the forefront of pushing AI safety to the mainstream. It has started several new charities. It’s responsible for a lot of wins for animals. It’s responsible for saving hundreds of thousands of lives. It’s about the only place out there that measures charities, and does so with a lot of rigor.”
But I think that in practice, when someone today is calling for “cause-first EA”, they’re calling for “longtermist / AI safety focused EA”. The diversity of the examples above seems to support a “members-first EA” (at least as outlined in this post).
I’d like to note that it is entirely possible for someone to sincerely mean “cause-first EA” and simultaneously believe that longtermism and AI safety should be the cause EA prioritizes.
As a community organizer I’ve lost track of how many times people I’ve introduced to EA get excited at first, but are then disappointed that all we seem to talk about are effective charities and animals instead of… mental health or political action or climate change or world war 3 or <insert favourite cause here>.
When this happens I try to take a member-first approach and make sure they understand what led to these priorities, so that the new member is equipped either to change their own mind, to argue with us, or to apply EA principles in their own work wherever that makes sense.
A member-first approach wouldn’t guarantee a diversity of causes. We could in theory have a very members-first movement that only prioritizes AI Alignment; that is entirely possible. The difference is that a members-first, AI-Alignment-focused movement would focus on ensuring its members properly understand cause-agnostic EA principles (something they can derive value from regardless of their ability to contribute to AI Alignment) and, from there, why AI Alignment just happens to be the thing the community mostly talks about at this point in time.
Our current cause-first approach is less concerned with teaching cause-agnostic EA principles and more concerned with simply getting skilled people of any kind, whether they care about EA principles or not, to work on AI Alignment or other important things. Teaching EA principles is mostly instrumental to that end goal.
I believe this is the real source of the tension you describe in the “cause-first” model. It has less to do with only one cause being focused on, and more to do with the fact that humans are tribalistic.
If you don’t put effort into making sure someone new is part of the tribe (in this case, giving them the cause-agnostic grounding in EA principles that they can take home and feel good about), then they won’t feel part of your cause-first movement unless they feel they can contribute to the cause.
I think if we were more members-first, we would see far more people who have nothing to offer AI Safety research nonetheless feel that “EA is my tribe.” Ergo, less tension.