I think the meta-point might be the crux of our disagreement.
I mostly agree with your inside view that other catastrophic risks struggle to be existential the way AI would, and I'm often a bit perplexed at how quickly people jump from "nearly everyone dies" to "literally everyone dies". Similarly, I'm sympathetic to the point that it's difficult to imagine particularly compelling scenarios where AI doesn't radically alter the world in some way.
But we should be immensely uncertain about the assumptions we make, and I would argue that by far the most likely first-order determinant of future value is something our inside-view models didn't predict. My issue is not with your reasoning, but with how much trust to place in our models in general. My critique is absolutely not that you shouldn't have an inside view, but that a well-developed inside view is one of many tools we use to gather evidence. Over-reliance on a single type of evidence leads to worse decision-making.