Slight update to the odds I've been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I've been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That's something I think is significantly more probable than AGI by the end of 2032. Previously, I'd been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight put the odds on Evan McMullin (who was running as an independent, not for a third party, but close enough) becoming president at 1 in 5,000 or 0.02%. Even these odds are quasi-arbitrary, since McMullin only became president in simulations where neither of the two major-party candidates won a majority of Electoral College votes. In such scenarios, Nate Silver arbitrarily put the odds at 10% that the House would vote to appoint McMullin as the president.
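For concreteness, here is the arithmetic behind that 1 in 5,000. This is a sketch: the 0.2% figure for the deadlock scenario is back-derived from the published numbers, not something I've seen FiveThirtyEight report directly, and the variable names are mine.

```python
# Back-of-the-envelope reconstruction of the 1-in-5,000 McMullin odds.
# The 0.2% deadlock figure is implied (0.02% / 10%), not directly reported.
p_deadlock_scenario = 0.002    # simulations where no candidate wins an
                               # Electoral College majority (implied: 1 in 500)
p_house_picks_mcmullin = 0.10  # Silver's arbitrary conditional estimate

p_mcmullin_president = p_deadlock_scenario * p_house_picks_mcmullin
print(p_mcmullin_president)      # 0.0002, i.e. 0.02%
print(1 / p_mcmullin_president)  # 5000.0, i.e. 1 in 5,000
```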
So, for now, it is more accurate for me to say: the probability of the creation of AGI before the end of 2032 is significantly less than 1 in 5,000 or 0.02%.
I can also expand the window of time from the end of 2032 to the end of 2034. That's a small enough expansion that it doesn't affect the probability much. Extending the window to the end of 2034 covers the latest dates that have appeared on Metaculus since the big dip in its timeline in the month following the launch of GPT-4. By the end of 2034, I still put the odds of AGI significantly below 1 in 5,000 or 0.02%.
My confidence interval is over 95%. [Edited Nov. 28, 2025 at 3:06pm Eastern. See comments below.]
I will continue to try to find other events to anchor my probability to. It's difficult to find good examples. An imperfect point of comparison is an individual's annual risk of being struck by lightning, which is 1 in 1.22 million. Over 9 years, the risk is about 1 in 135,000. Since the creation of AGI within 9 years seems less likely to me than that I'll be struck by lightning, I could also say the odds of AGI's creation within that timeframe are less than 1 in 135,000, or less than 0.0007%.
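For anyone who wants to check the compounding, a quick sketch, assuming each year's risk is independent and identical:

```python
annual_risk = 1 / 1.22e6  # NWS annual individual risk of a lightning strike

# Exact compounding over 9 independent years:
nine_year_risk = 1 - (1 - annual_risk) ** 9
print(nine_year_risk)      # ~7.377e-06
print(1 / nine_year_risk)  # ~135,556, i.e. about 1 in 135,000

# At probabilities this small, simple multiplication is nearly identical:
print(9 * annual_risk)     # ~7.377e-06
```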
It seems like once you get significantly below 0.1%, though, it becomes hard to intuitively grasp the probability of events or find good examples to anchor off of.
I don't think this should be downvoted. It's a perfectly fine example of reasoning transparency. I happen to disagree, but the disagree-vote button is there for a reason.
Thank you. Karma downvotes have ceased to mean anything to me.
People downvote for no discernible reason, at least none that is obvious to me or that they explain. I'm left to surmise what the reasons might be, including (in some cases) possibly disagreement, pique, or spite.
Neutrally informative things get downvoted, factual/straightforward logical corrections get downvoted, respectful expressions of mainstream expert opinion get downvoted: everything, anything. The content is irrelevant and the tone/delivery is irrelevant. So, I've stopped interpreting downvotes as information.
I don't think this sort of anchoring is a useful thing to do. There is no logical reason for third-party presidential success and AGI success to be linked mathematically. It seems like the third-party thing is based on much greater empirical grounding.
You linked them because your vague impression of the likelihood of one was roughly equal to your vague impression of the likelihood of the other. If your vague impression of the third-party thing changes, it shouldn't change your opinion of the other thing. You think that AGI is 5 times less likely than you previously thought because you got more precise odds about one guy winning the presidency ten years ago?
My (perhaps controversial) view is that forecasting AGI is in the realm of speculation where quantification like this is more likely to obscure understanding than to help it.
I don't think AGI is five times less likely than I did a week ago; I realized the number I had been translating my qualitative, subjective intuition into was five times too high. I also didn't change my qualitative, subjective intuition of the probability of a third-party candidate winning a U.S. presidential election. What changed was just the numerical estimate of that probability: from an arbitrarily rounded 0.1% figure to a still quasi-arbitrary but at least somewhat more rigorously derived 0.02%. The two outcomes remain logically disconnected.
I agree that forecasting AGI is an area where any sense of precision is an illusion. The level of irreducible uncertainty is incredibly high. As far as I'm aware, the research literature on forecasting long-term or major developments in technology has found that nobody (not forecasters and not experts in a field) can do it with any accuracy. With something as fundamentally novel as AGI, there is an interesting argument that it's impossible, in principle, to predict, since the requisite knowledge to predict AGI includes the requisite knowledge to build it, which we don't have (or at least I don't think we do).
The purpose of putting a number on it is to communicate a subjective and qualitative sense of probability in terms that are clear, that other people can understand. Otherwise, it's hard to put things in perspective. You can use terms like "extremely unlikely," but what does that mean? Is something that has a 5% chance of happening extremely unlikely? So, rolling a natural 20 is extremely unlikely? (There are guides to determining the meaning of such terms, but they rely on assigning numbers to the terms, so we're back to square one.)
Something that works just as well is comparing the probability of one outcome to the probability of another outcome. So, just saying that the probability of near-term AGI is less than the probability of Jill Stein winning the next presidential election does the trick. I don't know why I always think of things involving U.S. presidents, but my point of comparison for the likelihood of widely deployed superintelligence by the end of 2030 was that I thought it was more likely that the JFK assassination turned out to be a hoax, and that JFK was still alive.[1]
I initially resisted putting any definite odds on near-term AGI, but I realized a lack of specificity was hurting my attempts to get my message across.
This approach doesn't work perfectly, either, because what if different people have different opinions/intuitions on the probability of outcomes like Jill Stein winning? But putting low probabilities (well below 1%) into numbers has a counterpart problem: you don't know if you have the same intuitive understanding as someone else of what a 1 in 1,000 chance, a 1 in 10,000 chance, or a 1 in 100,000 chance means with regard to irreducibly uncertain events that are rare (e.g., recent U.S. presidential elections), unprecedented (e.g., AGI), or one-off (e.g., Russia ending the current war against Ukraine), and which can't be statistically or mechanically predicted.
When NASA models the chance of an asteroid hitting Earth as 1 in 25,000, or the U.S. National Weather Service calculates the annual individual risk of being hit by lightning as 1 in 1.22 million, I trust that has some objective, concrete meaning. If someone subjectively guesses that Jill Stein has a 1 in 25,000 chance of winning in 2028, I don't know if someone with a very similar gut intuition about her odds would also say 1 in 25,000, or if they'd say a number 100x higher or lower.
Possibly forecasters and statisticians have a good intuitive sense of this, but most regular people do not.
What do you mean by this? What is it that you're 95% confident about?
Maybe this is a misapplication of the concept of confidence intervals (math is not my strong suit, nor is forecasting, so let me know), but what I had in mind is that I'm forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the "correct" probability range (whatever that ends up meaning).
But now that I'm thinking about it more and doing a cursory search, I think with a range of probabilities for a given date (e.g., 0.00% to 0.02% by end of 2034), as opposed to a range of years (e.g., 5 to 20 years) or another definite quantity, the probability itself is supposed to represent all the uncertainty and the confidence interval is redundant.
As you can tell, I'm not a forecaster.
I'm forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the "correct" probability range
I kinda get what you're saying, but I think this is double-counting in a weird way. A 0.01% probability means that if you make 10,000 predictions of that kind, then about one of them should come true. So your 95% confidence interval sounds like something like: "20 times, I make 10,000 predictions that each have a probability between 0.00% and 0.02%; and 19 out of 20 times, about one out of the 10,000 predictions comes true."
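A quick simulation of that reading. This is a sketch under one assumption I'm adding: that the 0.00% to 0.02% range means a probability drawn uniformly from that interval, which is only one possible shape.

```python
import random

def run_batch(n_predictions=10_000):
    # One batch: for each prediction, draw its probability uniformly from
    # [0, 0.0002] (the 0.00%-0.02% range), then check whether the predicted
    # event "comes true" at that probability.
    hits = 0
    for _ in range(n_predictions):
        p = random.uniform(0.0, 0.0002)  # uniform-spread assumption (mine)
        if random.random() < p:
            hits += 1
    return hits

batches = [run_batch() for _ in range(20)]
print(batches)                      # typically 0-3 hits per batch of 10,000
print(sum(batches) / len(batches))  # ~1 on average, since the mean p is 0.01%
```

Under this uniform assumption, the effective point probability is just the mean of the spread, 0.01%; a shape that concentrates most of its mass near zero would give a point value well below that.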
You could reduce this to a single point probability. The math is a bit complicated, but I think you'd end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren't actually claiming to have a 0.001% credence.
I think there are other meaningful statements you could make. You could say something like, "I'm 95% confident that if I spend 10x longer studying this question, then I would end up with a probability between 0.00% and 0.02%."
You could reduce this to a single point probability. The math is a bit complicated, but I think you'd end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren't actually claiming to have a 0.001% credence.
Yeah, I'm saying the probability is significantly less than 0.02% without saying exactly how much less (that's much harder to pin down, and there are diminishing returns to exactitude here), so that means it's a range from 0.00% to <0.02%. Or just <0.02%.
The simplest solution, and the correct/generally recommended solution, seems to be to simply express the probability, unqualified.