One outstanding question is at what point AI capabilities are too close to loss of control. We propose to delegate this question to the AI Safety Institutes set up in the U.K., U.S., China, and other countries.
I consider it clickbait if you write “There Is a Solution”, but then say that there are these AI safety institutes that will figure out the crucial details of the solution some time in the future.
To be fair, this is a linkpost, and the norm is often to reuse the heading from the piece wherever else it is published. Time magazine's norms are probably a bit more pro-clickbait. I'll DM Otto with some more Forum-friendly title suggestions, though.
Thanks for your comment. I changed the title; the original one came from TIME. Still, we do believe there is a solution to existential risk. What we want to do is outline the contours of such a solution. A lot has to be filled in by others, including the crucial question of when to pause. We acknowledge this in the piece.