GPT5 won’t be what kills us all
This is an article I wrote last year about my mental model of the risks of AGI, written for a general audience (with various tweaks since). At the time, it had nearly zero readers. Recently I was a bit miffed that accelerationist David Shapiro deleted my entire conversation with him on Substack (in which he mostly ignored me anyway), so I thought, “why not try a friendlier audience?” Let me know if you think I did a good job (or not).
Synopsis: for reasons laid out in the article, I think most of the risk comes from the possibility of a low compute-to-intelligence ratio, and I doubt that the first AGI will be the one we should be most worried about. Instead, the problem is that the first one leads to others, and to competition that drives the creation of more and more varied designs. I don’t imagine my perspective is novel; the point is just to explain it in a way that is engaging, accessible, and logical.