A more recent Yudkowsky piece that really stuck with me: “There’s No Fire Alarm for Artificial General Intelligence”. It’s dense and difficult to summarize, but broadly, it examines the heuristics people use to predict future risk, and why each of them could fail in the case of AGI.