There are probably people who can answer this better, but here’s my crack at it (from most to least important):
1. If people who care about AI safety also happen to be the best at making AI, then they’ll try to align the AI they make. (This is already turning out to be a pretty successful strategy: OpenAI is an industry leader that cares a lot about risks.)
2. If somebody figures out how to align AI, other people can use their methods. They’d probably want to, if they buy that misaligned AI is dangerous to them too, though this could fail if alignment techniques turn out to be less powerful or harder to use than not-necessarily-aligned ones.
3. Credibility and public platform: people listen to Paul Christiano because he’s a serious AI researcher, which puts him in a position to convince important people to care about AI risk.