Ten Project Ideas for AI X-Risk Prioritization
I made a list of 10 ideas I'd be excited for someone to tackle, within the broad problem of "how should we prioritize resources within AI X-risk?" I won't claim these projects are more or less valuable than other things people could be doing, but I'd be excited if someone took a stab at them.
10 Ideas:
1. Threat Model Prioritization
2. Country Prioritization
3. An Inside-View Timelines Model
4. Modelling AI Lab Deployment Decisions
5. How Are Resources Currently Allocated Between Risks / Interventions / Countries?
6. How to Allocate Between "AI Safety" vs. "AI Governance"
7. Timing of Resources to Reduce Risks
8. Correlation Between Safety and Capabilities
9. How Should We Reason About Prioritization?
10. How Should We Model Prioritization?
I wrote up a longer (but still scrappy) doc here.