But I’ve found that when I meet “core EAs” (e.g. people working at CEA, 80k, FHI, etc.), there is far more divergence in views around AI x-risk than I’d expect, and this consensus does not seem to be present. I’m not sure why this discrepancy exists or how it could be fixed—maybe staff at these orgs could publish their “cause ranking” lists.
This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil’s funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.