I expect that your search for a “unified resource” will be unsatisfying. I think people disagree enough on their threat models/expectations that there is no real “EA perspective”.
I agree that there is no real “EA perspective”, but it seems like there could be a unified doc that a large cluster of people end up roughly endorsing. E.g., I think that if Joe Carlsmith wrote another version of “Is Power-Seeking AI an Existential Risk?” in the next several years, it’s plausible that a relevant cluster of people would end up thinking it basically lays out the key arguments and gets them right. (I’m unsure what I currently think about the old version of the doc, but I’m guessing I’ll think it misses some key arguments that now seem more obvious.)