For comparison, takeoffspeeds.com has an aggressive Monte Carlo run (with a median of 10^31 training FLOP) that yields a median of 2033.7 for 100% automation, and a p(TAI < 2030) of ~28%. That 28% is pretty radically different from your 2%. Do you know what your biggest disagreements with that model are?
I think my biggest disagreement with the takeoff speeds model is just that it’s conditional on things like no coordinated delays, no regulation, and no exogenous events like war, and that it doesn’t take into account model uncertainty. My other big argument here is that I just think robots aren’t very impressive right now, and it’s hard to see them going from unimpressive to extremely impressive in just a few short years. 2030 is very soon. Imagining even a ~4-year delay due to all of these factors produces a very different distribution.
Also, as you note, “takeoffspeeds.com talks about ‘AGI’ and you talk about ‘TAI’.” I think transformative AI is a lower bar than 100% automation. The model itself says they added “an extra OOM to account for TAI being a lower bar than full automation (AGI).” Notably, if you put 10^33 2022 FLOP into the takeoff model (and keep in mind that I was talking about 2023 FLOP), it produces a median year for >30% GWP growth of about 2032, which isn’t too far from what I said in the post:
Assuming no substantial delays or large disasters such as war in the meantime, I believe that TAI will probably arrive within about 15 years
I added about four years to this 2032 timeline due to robots, which I think is reasonable even given your considerations about how we don’t have to automate everything—we just need to automate the bottlenecks to producing more semiconductor fabs. But you could be right that I’m still being too conservative.
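(A rough back-of-the-envelope sketch of the numbers in this comment, purely to make the arithmetic explicit. The ~2x conversion between 2023 FLOP and 2022 FLOP is the one clarified further down the thread; nothing here reruns the takeoff model itself.)

```python
import math

# Sketch of the arithmetic above (not a rerun of the takeoffspeeds.com model).
# Assumptions, all taken from this thread:
#   - TAI training-requirement estimate: ~1e33 "2023 FLOP"
#   - one 2023 FLOP is worth roughly two 2022 FLOP (clarified further down)
#   - the takeoff model's median year for >30% GWP growth at ~1e33 2022 FLOP: ~2032
#   - extra delay added for robotics: ~4 years

tai_flop_2023 = 1e33
flop_2022_per_2023_flop = 2.0
tai_flop_2022_equiv = tai_flop_2023 * flop_2022_per_2023_flop
print(f"Requirement in 2022-FLOP terms: ~10^{math.log10(tai_flop_2022_equiv):.1f}")
# ~10^33.3, i.e. slightly above the 1e33 2022 FLOP actually entered into the model

model_median_year = 2032
robotics_delay_years = 4
print(f"Implied median with the robotics delay: ~{model_median_year + robotics_delay_years}")
# ~2036, roughly "within about 15 years" as stated in the post
```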
I think my biggest disagreement with the takeoff speeds model is just that it’s conditional on things like no coordinated delays, no regulation, and no exogenous events like war, and that it doesn’t take into account model uncertainty.
Cool, I thought that was most of the explanation for the difference in the median. But I thought it shouldn’t be enough to explain the 14x difference between 28% and 2% by 2030, because I think there should be a ≥20% chance that there are no significant coordinated delays, regulation, or relevant exogenous events if AI goes wild in the next 7 years. (And that model uncertainty should work to increase rather than decrease the probability, here.)
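(A minimal sketch of the multiplication implicit here, using only the numbers quoted above; treating the model’s 28% as conditional on no delays is the assumption being made.)

```python
# If the takeoff model's ~28% by 2030 is read as conditional on
# "no coordinated delays / regulation / exogenous events", and there is
# at least a 20% chance that none of those delays occur, then the
# unconditional probability should still be well above 2%.
p_tai_2030_given_no_delays = 0.28   # from the takeoffspeeds.com Monte Carlo
p_no_delays = 0.20                   # a lower bound, as argued above

p_tai_2030_lower_bound = p_tai_2030_given_no_delays * p_no_delays
print(f"Implied lower bound on p(TAI < 2030): {p_tai_2030_lower_bound:.1%}")  # ~5.6%
```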
If you think robotics would definitely be necessary, then I can see how that would be significant.
But I think it’s possible that we get a software-only singularity. Or, more broadly, simultaneously having (i) AI improving algorithms (...improving AIs), (ii) a large fraction of the world’s fab capacity redirected to AI chips, and (iii) AIs helping with late-stage hardware stuff like chip design. (I agree that it takes a long time to build new fabs.) This would simultaneously explain why robotics aren’t necessary (before we have crazy good AI) and decrease the probability of regulatory delays, since the AIs would just need to be deployed inside a few companies. (I can see how regulation would by default slow down some kinds of broad deployment, but it seems super unclear whether there will be regulation put in place to slow down R&D and internal deployment.)
Update: I changed the probability distribution in the post slightly in line with your criticism. The new distribution is almost exactly the same, except that I think it portrays a more realistic picture of short timelines. The p(TAI < 2030) is now 5% [eta: now 18%], rather than 2%.
Cool, I thought that was most of the explanation for the difference in the median. But I thought it shouldn’t be enough to explain the 14x difference between 28% and 2% by 2030
That’s reasonable. I think I probably should have put more like 3-6% credence before 2030. I should note that it’s a bit difficult to tune the Metaculus distributions to produce exactly what you want, and the distribution shouldn’t be seen as an exact representation of my beliefs.
I don’t understand this. Why would there be a 2x speedup in algorithmic progress?

Sorry, that was very poor wording. I meant that one 2023 FLOP is probably about equal to two 2022 FLOP, due to continued algorithmic progress. I’ll reword the comment you replied to.
Nice, gotcha.

Incidentally, as its central estimate for algorithmic improvement, the takeoff speeds model uses AI and Efficiency’s ~1.7x per year, and then halves it to ~1.3x per year (because today’s algorithmic progress might not generalize to TAI). If you’re at 2x per year, then you should maybe increase the “returns to software” parameter from 1.25 to ~3.5, which would cut the model’s timelines by something like 3 years. (More on longer timelines, less on shorter timelines.)
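(A small sketch of the rate arithmetic here: “halving” a multiplicative yearly rate means halving it in log space, i.e. taking the square root of the multiplier. The mapping from a 2x/year rate to the 1.25 → ~3.5 “returns to software” setting, and the ~3-year cut, are the model’s own and aren’t recomputed here.)

```python
import math

# "Halving" a multiplicative growth rate means halving it in log space,
# i.e. taking the square root of the yearly multiplier.
ai_and_efficiency_rate = 1.7
halved_rate = math.sqrt(ai_and_efficiency_rate)
print(f"Halved rate: ~{halved_rate:.2f}x per year")   # ~1.30x

# Compounding over, say, 7 years (to 2030) at 2x/yr vs ~1.3x/yr:
years = 7
print(f"2.0x/yr over {years} yrs: ~{2.0**years:.0f}x total")          # ~128x
print(f"~1.3x/yr over {years} yrs: ~{halved_rate**years:.1f}x total")  # ~6x
```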