Thanks for your comment, Jack, that’s a really great point. I suppose we would seek to influence AI slightly differently for each reason:
Reduce chance of unaligned/uncontrolled AI
Increase chance of useful AI
Increase chance of exactly aligned AI
For example, you could reduce AI risk by stopping all AI development, but you’d then lose the other two benefits; or you could create a practically useful AI that nevertheless wouldn’t guide humanity towards an optimal future. That being said, I reckon in practice a lot of work to improve the development of AI would hit all three. Though if you view one reason as much more important than the others, you might focus on a specific type of AI work.
That makes sense. I’m no expert in AI, but I would think:
Stopping AI development just isn’t going to happen.
Making useful AI isn’t very neglected, and progress here has certainly been quite impressive, so I’m optimistic that superintelligence will arrive at some point.
There probably isn’t much (if any) difference between the work that is required to make aligned AI and to make maximally-aligned AI.
It would be interesting to know if anyone thinks I’m wrong on any of these points.