Thanks for your comment, Jack, that's a really great point. I suppose we would seek to influence AI development slightly differently for each reason:
1. Reduce the chance of unaligned/uncontrolled AI
2. Increase the chance of useful AI
3. Increase the chance of exactly aligned AI
For example, you could reduce AI risk by stopping all AI development, but then you'd lose the other two benefits; or you could create a practically useful AI but not one that would guide humanity towards an optimal future. That being said, I reckon in practice a lot of work to improve the development of AI would hit all three. Though if you view one reason as much more important than the others, then maybe you'd focus on a specific type of AI work.
Thanks for sharing this paper; I hadn't heard of it before and it sounds really interesting.