Let me be clear: I find the Bay Area EA Community on AI risk intellectually dissatisfying, and have ever since I started my PhD in Berkeley. The contribution/complaint ratio is off, the ego/skill ratio is off, the tendency to armchair analyze deep learning systems instead of having experiments drive decisions was historically off, and the intellectual diversity/monoculture/overly deferential patterns are really off.
I am not a “strong axiological longtermist” and weigh normative factors such as special obligations and, especially, desert.
The Bay Area EA Community was the only game in town on AI risk for a long time. I do hope AI safety outgrows EA.
Many people across EA strongly agree with you about the flaws of the Bay Area AI risk EA position/orthodoxy,[1] across many of these dimensions, and I strongly disagree with the implication that you have to be a strong axiological longtermist, believe that you have no special moral obligations to others, and live in the Bay while working on AI risk in order to count as an EA.
To the extent that they gave you the impression that this is all EA is or was, I’m sorry. I’m similarly sorry if this had bad effects, explicitly or implicitly, on the direction of, or implications for, your work, as well as on the future of AI Safety as a cause. And even if I viewed AI Safety as a more important cause than I currently do, I would still want EA to share the task of shaping a beneficial future of AI with the rest of the world, and to pursue more co-operative strategies rather than assuming it’s the only movement that can or should be a part of it.
tl;dr: To me, you seem to be overindexing on a geographically concentrated, ideologically undiverse group of people/institutions/ideas as ‘EA’, when there’s a lot more to EA than that.
I don’t think Dan’s statement implies the existence of those fairly specific beliefs you must endorse to “count” as an EA. Given that there is no authoritative measure of who is / isn’t an EA, it is more akin to a social identity one can choose to embrace or reject.
It’s common for an individual to decide not to identify with a certain community because of their aversion to a subpart or subgroup of that community. This remains true even where the subgroup is only a minority of the larger community, or the subpart is only a minor-ish portion of the community ideology.
My guess is that public identification as an EA is not a plus for the median established AI safety researcher, so there’s no benefit for someone in that position to adopt an EA identity if they have any significant reservations.
I agree with some of this comment, but I really don’t get the relevance of the paper you linked:
tendency to armchair analyze deep learning systems instead of having experiments drive decisions was historically off
The paper seems to mostly be evidence that the benchmarks that you and others who have been focused on certain kinds of ML experiments have created are not really helping much with AI alignment.
I also disagree somewhat with the methodology of this paper, but I have trouble seeing how it’s evidence of people doing too much armchair analyzing, when as far as I can tell the flaws with these benchmarks were the result of people doing too much “IDK what alignment is, but maybe if we measure this vaguely related thing it will help” and too little “man, I should really understand what I would learn if this benchmark improved, and whether it would cause me to actually update that a system which has improved on this benchmark is more aligned and less likely to cause catastrophic consequences”.
Thank you for your response, @Dan H. I understand that you do not agree with a lot of EA doctrine (for lack of a better word), but that you are a Longtermist, albeit not a “strong axiological longtermist.” Would that be a fair statement?
Also, although it took some time, I’ve met a lot of scientists working on AI safety who have nothing to do with EA or Longtermism or AI doom scenarios. It’s just that they don’t publish open letters, create political action funds, or have any funding mechanism similar to Open Philanthropy or similarly minded billionaire donors like Jaan Tallinn and Vitalik Buterin. As a result, there’s the illusion that AI Safety is dominated by EA-trained philosophers and engineers.
I am one of the many people across EA who agree about these flaws, and I increasingly feel that this group within EA has utterly lost its mandate.