Thanks for the helpful addition. I'm not an expert in the x-risk funding landscape, so I'll defer to you. Your suggestion sounds like it could be a sensible one on cross-cause prio grounds. It's possible that this dynamic illustrates a different pitfall of making prio judgments only at the level of big cause areas: if we lump AI in with other x-risks and hold cause-level funding steady, funding between AI and non-AI x-risks becomes zero-sum.