Beyond capacity building, it's not completely clear to me that there are robustly good interventions in AI safety, and I think more work is needed to prioritize among them.
I think it's pretty clear[1] that stopping further AI development (or Pausing) is a robustly good intervention in AI safety, i.e., one that reduces AI x-risk.
[1] But see this post for more detailed reasoning.