Note: I’ve edited this post to change my bottom-line TAI arrival distribution slightly. The edit doesn’t reflect much of a change in my (underlying) transformative AI timelines, but rather (mostly) reflects a better compromise when visualizing things.
To make a long story short: previously I put too little probability on TAI arriving between 2027 and 2035, because I wanted the plot to put very low probability on TAI arriving before 2027. Because of the way the Metaculus sliders work, this made it difficult to convey a very rapid increase in my probability after 2026. Now I've decided to compromise in a way that puts what I think is an unrealistically high probability on TAI arriving before 2027.
That said, I have updated a little bit since I wrote this post:
I’m a little more skeptical of TAI happening at all in the 21st century, mostly as a result of reflecting on arguments in this paper from Ege Erdil and Tamay Besiroglu.
I’m a little more bullish on the possibility of a rapid scale-up of hardware in the mid-to-late 2020s, delivering a 10^28 FLOP training run before 2026, and/or a 10^30 FLOP training run before 2030. This update came after I read more about the existing capacity of semiconductor fabs.
I'll try not to change the post much going forward, so that it can serve as a historical snapshot of how I thought about AI timelines in 2023, rather than a frequently updated document.