>It’s plausible humans will go extinct from AI. It’s also plausible humans will go extinct from supervolcanoes.
Our primitive and nontechnological ancestors survived tens of millions of years of supervolcano eruptions (not to mention mass extinctions from asteroid/comet impacts), and our civilization’s ability to withstand them is unprecedentedly high and rapidly increasing. That’s not plausible; it’s enormously remote, well under 1/10,000 this century.
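As a rough, purely illustrative back-of-the-envelope for where a bound like that comes from (the recurrence interval and the conditional extinction probability below are assumptions made up for illustration, not measured values):

```python
# Toy bound on per-century extinction risk from supervolcanoes.
# Both inputs are illustrative placeholders, not measured values.
recurrence_years = 50_000                # assumed average gap between supervolcano-scale eruptions
p_extinction_given_eruption = 0.005      # assumed chance an eruption actually ends civilization

eruptions_per_century = 100 / recurrence_years
p_doom_this_century = eruptions_per_century * p_extinction_given_eruption

print(f"P(supervolcano doom this century) ~ {p_doom_this_century:.6f}")
# prints ~0.000010, i.e. about 1/100,000 -- well under 1/10,000
```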
I agree with what I think you intend to say, but in my mind plausible = any chance at all.
This is what I meant, yeah.
There’s also an issue of “low probability” meaning fundamentally different things in the case of AI doom vs supervolcanoes.
P(supervolcano doom) > 0 is a frequentist statement. “We know from past observations that supervolcano doom happens with some (low) frequency.” This is a fact about the territory.
P(AI doom) > 0 is a Bayesian statement. “Given our current state of knowledge, it’s possible we live in a world where AI doom happens.” This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we’re just confused and there’s no way this could ever actually happen.
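To make the map/territory contrast concrete, here is a toy sketch; every number in it is invented purely for illustration:

```python
# "Frequentist" statement, a fact about the territory:
# over a long record, supervolcano-scale eruptions happened at some observed frequency.
eruptions_in_record = 20                  # hypothetical count of such eruptions
record_length_centuries = 1_000_000       # hypothetical length of the record, in centuries
p_eruption_per_century = eruptions_in_record / record_length_centuries

# "Bayesian" statement, a fact about our map:
# we are unsure whether AI doom is even the kind of thing that can happen at all.
p_world_where_ai_doom_can_happen = 0.5    # credence that we live in a world where it can
p_doom_given_such_world = 0.2             # credence in doom, conditional on living in such a world
p_ai_doom = p_world_where_ai_doom_can_happen * p_doom_given_such_world

# New arguments can move the second number a lot (it tracks our confusion);
# only new geology moves the first (it tracks the record).
print(p_eruption_per_century, p_ai_doom)
```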
What.
Supervolcano doom probabilities are more resilient because the chain of causation is shorter and we have a natural history track record to back up some of the key points in the chain. But the difference is a matter of degree, not kind. We very much do not have a long-term track record of human civilizations dying to supervolcanoes to draw from; almost every claim about the probability of human extinction is ultimately a claim about (hopefully improving) models, not a sample of long-run means.
Could be, could be.