Like ThomasW, this also reads to me like the “Berkeley take on things” (which you do acknowledge, thanks), and if you lived in a different EA hub, you’d likely say different things. Being in Oxford, I’d say many are focused on longtermism, but not necessarily on AI safety per se.
Claim 2, in particular, feels a little strong to me. If the claim were “Many EA leaders believe that making AI go well is one of our highest priorities, if not the highest priority”, I think it would be right.
I also wish the post included a background claim I believe to be true: “there are lots of EAs working on lots of different things, and many disagree with each other. Many EAs would disagree with several of my own claims here”.
Thanks for writing this up!