Thank you for all the great and detailed analysis again. +1 on GCRI’s great work, on a shoestring budget, this year. I think my comment from last year’s version of this post holds word for word, but more strongly (copied below for convenience). I would note that I believe Seth and others on his team are working on some very limited funding horizons, which considerably limits what they can do—EA support would likely make a very big positive difference.
“I would encourage EAs to think about supporting GCRI, whether on AI safety or (especially) GCR more broadly. (1) As you note, they’ve done some useful analyses on a very limited budget. (2) It’s my view that a number of their members (Seth Baum and Dave Denkenberger in particular in my case) have been useful and generous information and expertise resources to this community over the last couple of years (Seth has both provided useful insights, and made very useful networking connections, for me and others in FHI and CSER re: adjacent fields that we could usefully engage with, including biosecurity, risk analysis, governance etc). (3) They’ve been making good efforts to get more relevant talent into the field—e.g. one of their research associates, Matthias Maas, gave one of the best-received contributed talks at our conference this week (on nuclear security and governance). (4) They’re less well-positioned to secure major funding from other sources than some of the orgs above. (5) As a result of (4), they’ve never really had the opportunity to “show what they can do” so to speak—I’m quite curious as to what they could achieve with a bigger budget and a little more long-term security.
So I think there’s an argument to be made on the grounds of funding opportunity constraints, scalability, and exploration value.”
Also, +1 on the great work being done by AI Impacts (also on a shoestring!).