These seem like reasonable points in isolation, but I'm not sure they answer the first question as actually posed. In particular:
Why would it necessarily be "a bunch of people new to the problem [spouting] whatever errors they've thought up in the first five seconds of thinking"? Jay's spectrum of suggestions was wide and included a video or podcast. With that kind of thing there would appear to be ample scope either to have someone experienced with the problem do the presenting, or to have it reviewed by people with relevant expertise before it was released. A Big Event On Stage wasn't the only thing on offer.
The actual question in the post was "I have little doubt that if I reached out to two random poverty or animal-focused EAs with the pitch 'I can get a bunch of respected journalists, academics, and policymakers to hear the exact perspective you want me to share with them on our trusted/prestigious platform,' they would be pretty psyched about that (as I think they should be). So what's so different about AI safety?" I don't really know what your answer to this is: is AI particularly vulnerable to the downsides you described (and if so, why)? Or are the other areas of EA making a mistake?
"If there's something to be gained from having national-security higher-ups understanding the AGI alignment strategic landscape, or from having alignment people understand the national security landscape,..." I'm pretty surprised that the start of this sentence is phrased as "if there is" rather than "while there is certainly", so I want to check: is that deliberate? That is, are you actually sceptical about whether there's anything that national-security higher-ups have to offer?
If you actually don't think there's anything to be gained from cooperation between AGI alignment people and national security people, the weakness of your other objections makes more sense, because they aren't really your true rejection; your true rejection is that there's no upside and some potential downsides.