Let me be clear: I have found the Bay Area EA community on AI risk intellectually dissatisfying ever since I started my PhD in Berkeley. The contribution/complaint ratio is off, the ego/skill ratio is off, the tendency to armchair-analyze deep learning systems instead of letting experiments drive decisions was historically off, and the intellectual monoculture and overly deferential norms are really off.
I am not a “strong axiological longtermist,” and I give weight to normative factors such as special obligations and, especially, desert.
The Bay Area EA Community was the only game in town on AI risk for a long time. I do hope AI safety outgrows EA.