That makes sense. I’m no expert in AI but I would think:
Stopping AI development just isn’t going to happen.
Making useful AI isn’t very neglected, and progress here has certainly been impressive, so I’m optimistic that superintelligence will arrive at some point.
There probably isn’t much (if any) difference between the work required to make aligned AI and the work required to make maximally-aligned AI.
Would be interesting to know if anyone thinks I’m wrong on any of these points.