Many people across EA strongly agree with you about the flaws of the Bay Area AI risk EA position/orthodoxy,[1] across many of these dimensions, and I strongly disagree with the implication that you have to be a strong axiological longtermist, believe that you have no special moral obligations to others, and live in the Bay while working on AI risk in order to count as an EA.
To the extent that this was the impression they gave you of all that EA is or was, I'm sorry. I'm similarly sorry if this had bad effects, explicit or implicit, on the direction or implications of your work, as well as on the future of AI Safety as a cause. And even if I viewed AI Safety as a more important cause than I currently do, I'd still want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more co-operative strategies rather than assuming it's the only movement that can or should be a part of it.
tl;dr—To me, you seem to be overindexing on a geographically concentrated, ideologically undiverse group of people/institutions/ideas as ‘EA’, when there’s a lot more to EA than that.
I don’t think Dan’s statement implies the existence of those fairly specific beliefs you must endorse to “count” as an EA. Given that there is no authoritative measure of who is / isn’t an EA, it is more akin to a social identity one can choose to embrace or reject.
It’s common for an individual to decide not to identify with a certain community because of their aversion to a subpart or subgroup of that community. This remains true even where the subgroup is only a minority of the larger community, or the subpart is only a minor-ish portion of the community ideology.
My guess is that public identification as an EA is not a plus for the median established AI safety researcher, so there’s no benefit for someone in that position to adopt an EA identity if they have any significant reservations.
I am one such person, and I increasingly feel that this group within EA has utterly lost its mandate.