Great post.
Adding on to one of the points mentioned: I think that if you are driven to make AI go well because of EA, you probably want to do this in a very specific way (i.e., big picture: astronomical waste, x-risks being far worse than catastrophic risks, avoiding s-risks; smaller picture: what to prioritize within AIS, etc.). This, I think, means you want people in the field (or at least the most impactful ones) to be EA or EA-adjacent, because what are the odds that the values of an explicitly moral normie and those of EA will line up perfectly on the actionable things that really matter?
Another related point is that a bunch of people might join AIS for clout or (future) power, perhaps not even consciously; finding out your real motivations is hard until there are big stakes! Having been an EA for a while before AIS (and having shown flexibility about cause prioritization) is a good signal that you're not one of them. Not a perfect signal, but substantial evidence imo.