For the different risks from AI, how might we solve each of them? What are the challenges to implementing those solutions? I.e. when is the problem engineering, incentives, etc?
There are many approaches, but the challenge imo is making any of them 100% watertight, and we are very far from that, with no complete roadmap in sight. 99% isn't going to cut it when the AGI is far smarter than us and a single misaligned execution of an instruction is enough to doom us all.