I don’t think AGI is five times less likely than I did a week ago; rather, I realized the number I had been translating my qualitative, subjective intuition into was five times too high. I also didn’t change my qualitative, subjective intuition about the probability of a third-party candidate winning a U.S. presidential election. What changed was just the numerical estimate of that probability: from an arbitrarily rounded 0.1% figure to a still quasi-arbitrary but at least somewhat more rigorously derived 0.02%. The two outcomes remain logically disconnected.
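For concreteness, here is a quick sketch of what that revision amounts to in "1 in N" terms, using only the two figures above:

```python
# Convert the two percentage estimates above into "1 in N" odds and compare them.
old_estimate = 0.001   # 0.1%, the arbitrarily rounded figure
new_estimate = 0.0002  # 0.02%, the somewhat more rigorously derived figure

print(f"0.1%  is 1 in {1 / old_estimate:,.0f}")   # 1 in 1,000
print(f"0.02% is 1 in {1 / new_estimate:,.0f}")   # 1 in 5,000
print(f"Old figure / new figure = {old_estimate / new_estimate:.0f}x")  # 5x
```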
I agree that forecasting AGI is an area where any sense of precision is an illusion. The level of irreducible uncertainty is incredibly high. As far as I’m aware, the research literature on forecasting long-term or major developments in technology has found that nobody (neither forecasters nor experts in the relevant field) can do it with any accuracy. With something as fundamentally novel as AGI, there is an interesting argument that it’s impossible, in principle, to predict, since the requisite knowledge to predict AGI includes the requisite knowledge to build it, which we don’t have (or at least I don’t think we do).
The purpose of putting a number on it is to communicate a subjective and qualitative sense of probability in terms that are clear and that other people can understand. Otherwise, it’s hard to put things in perspective. You can use terms like “extremely unlikely,” but what does that mean? Is something that has a 5% chance of happening extremely unlikely? So, rolling a natural 20 on a d20 is extremely unlikely? (There are guides to determining the meaning of such terms, but they rely on assigning numbers to the terms, so we’re back to square one.)
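To make that circularity concrete, here is a minimal sketch of how such a guide ends up working; the verbal terms and numeric cutoffs below are made up for illustration, not taken from any particular guide:

```python
# Hypothetical verbal-probability guide; the cutoffs are illustrative only,
# not taken from any real guide.
SCALE = [
    (0.01, "extremely unlikely"),
    (0.10, "very unlikely"),
    (0.33, "unlikely"),
    (0.66, "about as likely as not"),
    (0.90, "likely"),
    (0.99, "very likely"),
    (1.00, "virtually certain"),
]

def verbal(p: float) -> str:
    """Map a numeric probability (0..1) to a verbal term using the made-up cutoffs."""
    for cutoff, term in SCALE:
        if p <= cutoff:
            return term
    raise ValueError("probability must be between 0 and 1")

print(verbal(0.05))  # a natural 20 on a d20 (5%) -> "very unlikely" under these cutoffs
```

Whatever cutoffs a guide picks, the verbal term is only as meaningful as the number behind it.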
Something that works just as well is comparing the probability of one outcome to the probability of another outcome. So, just saying that the probability of near-term AGI is less than the probability of Jill Stein winning the next presidential election does the trick. I don’t know why I always think of things involving U.S. presidents, but my point of comparison for the likelihood of widely deployed superintelligence by the end of 2030 was that I thought it was more likely that the JFK assassination would turn out to be a hoax and that JFK was still alive.[1]
I initially resisted putting any definite odds on near-term AGI, but I realized a lack of specificity was hurting my attempts to get my message across.
This approach doesn’t work perfectly either, because different people may have different opinions or intuitions about the probability of outcomes like Jill Stein winning. But putting low probabilities (well below 1%) into numbers has a counterpart problem: you don’t know whether you have the same intuitive understanding as someone else of what a 1 in 1,000, a 1 in 10,000, or a 1 in 100,000 chance means when applied to events with high irreducible uncertainty, i.e. events that are rare (e.g. recent U.S. presidential elections), unprecedented (e.g. AGI), or one-off (e.g. Russia ending the current war against Ukraine), and that can’t be statistically or mechanically predicted.
When NASA models the chance of an asteroid hitting Earth as 1 in 25,000 or the U.S. National Weather Service calculates the annual individual risk of being hit by lightning as 1 in 1.22 million, I trust that has some objective, concrete meaning. If someone subjectively guesses that Jill Stein has a 1 in 25,000 chance of winning in 2028, I don’t know if someone with a very similar gut intuition about her odds would also say 1 in 25,000, or if they’d say a number 100x higher or lower.
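To illustrate how wide that range is, here is a quick sketch anchored on the 1 in 25,000 figure, with the 100x factors from above:

```python
# How far apart "100x higher or lower" lands around a subjective 1-in-25,000 guess.
guess = 1 / 25_000

for factor in (100, 1, 1 / 100):
    p = guess * factor
    print(f"{p:.6%}  (1 in {1 / p:,.0f})")

# 0.400000%  (1 in 250)
# 0.004000%  (1 in 25,000)
# 0.000040%  (1 in 2,500,000)
```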
Possibly forecasters and statisticians have a good intuitive sense of this, but most regular people do not.