I recently spent some time trying to work out what I think about AI timelines. I definitely don’t have any particular insight here; I just thought it was a useful exercise for me to go through for various reasons (and I did find it very useful!).
The upshot: I “estimated” a ~5% chance of TAI by 2030 and a ~20% chance of TAI by 2050 (the probabilities for AGI are slightly higher). As you’d expect me to say, these numbers are highly non-robust.
When I showed the plots below to a couple of people, they commented that they were surprised my AGI probabilities are higher than my TAI ones, and I now think I didn’t consider non-AGI routes to TAI carefully enough when I did this. I’d now probably increase the TAI probabilities a bit and lower the AGI ones a bit compared to what I’m showing here (by “a bit” I mean maybe a few percentage points).
I generated these numbers by forming an inside view and an outside view, then making some heuristic adjustments. The inside and outside views are ~weighted averages of various forecasts. My timelines are especially sensitive to how I chose and weighted the forecasts for my outside view.
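To make the aggregation step concrete, here’s a minimal sketch of what a weighted average of forecasts looks like. All of the forecast names, probabilities, and weights below are hypothetical placeholders, not the ones I actually used:

```python
# Minimal sketch: combining several forecasts of P(TAI by year Y)
# as a weighted average. Every number here is a hypothetical
# placeholder, not a forecast or weight I actually used.

# Each forecast gives a probability of TAI by 2030 and by 2050.
forecasts = {
    "forecast_a": {2030: 0.03, 2050: 0.15},
    "forecast_b": {2030: 0.08, 2050: 0.30},
    "forecast_c": {2030: 0.04, 2050: 0.18},
}

# Subjective weights reflecting how much I trust each forecast;
# they should sum to 1.
weights = {"forecast_a": 0.5, "forecast_b": 0.2, "forecast_c": 0.3}

def aggregate(year: int) -> float:
    """Weighted average of P(TAI by `year`) across forecasts."""
    return sum(weights[name] * probs[year] for name, probs in forecasts.items())

for year in (2030, 2050):
    print(f"P(TAI by {year}) = {aggregate(year):.3f}")
```

The heuristic adjustments then shift these aggregated numbers up or down rather than entering the average itself.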
Here are my timelines in graphical form:
And here they are again alongside some other timelines people have made public:
If you want more detail, there’s a lot more in this Google Doc. I’ll probably write another shortform post later with some more thoughts and reflections on the process.