This is going to be a quick review, since there has already been plenty of discussion of this post and people understand it well. But this post was very influential for me personally, and it helped communicate yet another aspect of the key problem with AI risk—the fact that it's so unprecedented, which makes it hard to test and iterate on solutions, hard to raise awareness and build agreement about the nature of the problem, and hard to know how much time we have left to prepare.
AI is simply one of the biggest worries among longtermist EAs, and this essay does a good job of describing a social dynamic unique to the space of AI risk that makes dealing with the risk harder. For this reason it would be a fine inclusion in the decadal review.