I agree that consensus is unlikely regarding AI safety, but what I meant was rather that it’s useful when individuals make clear claims about difficult questions, and that’s possible whether or not others agree with them. In AI Impacts’ interview series, such claims are made (e.g. here: https://aiimpacts.org/conversation-with-adam-gleave/).
Thanks for the encouragement!
My best guess is that no similarly easily interpretable high-level conclusions exist in AI safety with a comparable degree of confidence and consensus.
Got it, yeah I agree that’s really valuable.