1. Implementation-focused policy advocacy and communication are neglected
Both technical AI-safety research and policy research only reduce risk when governments and AI companies actually adopt their recommendations. Adoption hinges on clear, persuasive communication and sustained advocacy that translate safety ideas into concrete, implementable legal clauses and enforcement mechanisms. That is the job of implementation-focused work, including communications: turning promising ideas into action.
Yet funding, talent, and attention remain skewed toward technical and policy research; my quick estimates suggest that funding for safety implementation work is less than 10% of total funding for research.
This has appeared to me as a serious bottleneck in the AI safety space for a while now. Does anyone know why this kind of work is rarely funded?