So my intuition is that the two most important updates EA has undergone are “it’s not that implausible that par-human AI is coming in the next couple of decades” and “the world is in fact dropping the ball on this quite badly, in the sense that maybe alignment isn’t super hard, but to a first approximation no one in the field has checked.”
(Which is both an effect and a cause of updates like “maybe we can figure stuff out in spaces where the data is more indirect and hard-to-interpret”, “EA should be weirder”, “EA should focus more on research and intellectual work and technical work”, etc.)
But I work in AI x-risk and naturally pay more attention to that stuff, so maybe I’m missing other similarly-deep updates that have occurred. Like, maybe there was a big update at some point about the importance of biosecurity? My uninformed guess is that if we’d surveyed future EA leaders in 2007, they already would have been on board with making biosecurity a top global priority (if there are tractable ways to influence it), whereas I think this is a lot less true for AI alignment.
https://www.openphilanthropy.org/research/three-key-issues-ive-changed-my-mind-about/
Came here to cite the same thing! :)
Note that Dustin Moskovitz says he’s not a longtermist, and “Holden isn’t even much of a longtermist.”