I’m a vegan existential AI safety researcher. I once identified as EA, now as EA-adjacent. So, superficially, I’m part of the problem you describe. However, my reasons for no longer identifying as EA have nothing to do with FTX or other PR concerns. It’s not a “mask”. I just have philosophical disagreements with EA, arising from my own personal growth, that seem significant enough to acknowledge.
To be clear, I’m very grateful to EA donors and orgs for supporting my research. I think that both EAs in AI safety and EAs more broadly are doing tonnes of good, for which they genuinely deserve gratitude and praise, from me and from nearly everyone else.
At the same time, it’s a perfectly legitimate personal choice not to identify as EA. Moreover, the case for the importance of AI X-safety doesn’t rest on EA assumptions (some of which I reject), but is defensible on much broader grounds. And there is no reason that every individual or organization working on AI X-safety must identify as EA or recruit only EA-aligned personnel, even if they have a history with EA, funding from EA, etc.
Let’s keep cooperating and accomplishing great things, but let’s also acknowledge each other’s right to ideological pluralism.
I intentionally stayed meta because I didn’t especially want to start an argument about EA premises. Concretely, my disagreements with EA are that I don’t believe in any of:
Moral realism
Radical impartiality
Utilitarianism
Longtermism
I view improving the world as an enterprise of collective rationality / cooperation, not a moral imperative (I don’t believe in moral imperatives). I care much more about the people (and other creatures) closer to me in the social graph, but I also want to cooperate with other people for mutual gain, and in particular I endorse/promote social norms that create incentives beneficial for nearly everyone (e.g. rewarding people for helping others / improving the world).
Why I changed some of my views in this particular direction is a long story, but it involved a lot of reflection and thinking about my preferences on different levels of abstraction (from “how do I feel about such-and-such particular situation” to “what could an abstract mathematical formalization of my preferences look like”).