But then I've found that when I meet "core EAs", e.g. people working at CEA, 80k, FHI etc., there is far more divergence in views around AI x-risk than I'd expect, and this consensus does not seem to be present. I'm not sure why this discrepancy exists and I'm not sure how it could be fixed; maybe staff at these orgs could publish their "cause ranking" lists.
This post is now three years old but is roughly what you suggest. For convenience I will copy one of the more relevant graphs into this comment:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil's funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.