Thanks for a very detailed and informative post.
As I have before, I would encourage EAs to think about supporting GCRI, whether on AI safety or (especially) GCR more broadly. (1) As you note, they’ve done some useful analyses on a very limited budget. (2) In my experience, a number of their members (Seth Baum and Dave Denkenberger in particular) have generously shared information and expertise with this community over the last couple of years; Seth has both provided useful insights and made valuable networking connections for me and others at FHI and CSER in adjacent fields we could usefully engage with, including biosecurity, risk analysis, and governance. (3) They’ve been making good efforts to bring more relevant talent into the field; for example, one of their research associates, Matthias Maas, gave one of the best-received contributed talks at our conference this week (on nuclear security and governance). (4) They’re less well-positioned than some of the organisations above to secure major funding from other sources. (5) As a result of (4), they’ve never really had the opportunity to “show what they can do”, so to speak; I’m quite curious what they could achieve with a bigger budget and a little more long-term security.
So I think there’s an argument to be made on the grounds of funding opportunity constraints, scalability, and exploration value. The argument is perhaps weaker on the grounds of AI safety alone, but stronger for GCR more broadly.