I haven’t considered all of the inputs to Cotra’s model, most notably the 2020 training computation requirements distribution. Without forming a view on that, I can’t really say that ~53% represents my overall view.
Sorry to bang on about this again and again, but it’s important to repeat for the benefit of those who don’t know: The training computation requirements distribution is by far the biggest cruxy input to the whole thing; it’s the input that matters most to the bottom line and is most subjective. If you hold fixed everything else Ajeya inputs, but change this distribution to something I think is reasonable, you get something like 2030 as the median (!!!) Meanwhile if you change the distribution to be even more extreme than Ajeya picked, you can push timelines arbitrarily far into the future.
Investigating this variable seems to have been beyond scope for the XPT forecasters, so this whole exercise is IMO merely that—a nice exercise, to practice for the real deal, which is when you think about the compute requirements distribution.
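To make the "biggest cruxy input" point concrete, here is a toy sketch of the general shape of a bio-anchors-style calculation. It is not Cotra's actual model, and every number in it (the requirements distribution, the starting compute level, the combined growth rate) is a made-up placeholder; the point is only the mechanism: the median arrival year is roughly the year when projected affordable compute crosses the sampled requirement, so where you put the requirements distribution drives the whole bottom line.

```python
import numpy as np

# Toy sketch only; NOT Cotra's model. All parameters below are placeholders.
rng = np.random.default_rng(0)

# 1. The cruxy input: sampled training-compute requirements for TAI (log10 FLOP).
req_log10_flop = rng.normal(loc=35, scale=4, size=100_000)   # placeholder distribution

# 2. Projected largest affordable training run, folding spending, hardware
#    price-performance, and algorithmic progress into one growth rate (placeholder).
log10_flop_2023 = 25.0        # rough scale of today's largest runs (placeholder)
growth_oom_per_year = 0.5     # combined orders of magnitude per year (placeholder)

# 3. Arrival year = first year the projected compute crosses the sampled requirement.
years_needed = np.maximum(req_log10_flop - log10_flop_2023, 0) / growth_oom_per_year
arrival_year = np.minimum(2023 + np.ceil(years_needed), 2100)   # cap the horizon

print("median arrival year:", int(np.median(arrival_year)))
# With these placeholder numbers the median lands in the 2040s; lowering `loc` by a
# few OOMs pulls it into the late 2020s, raising it pushes it out by decades. That is
# the sense in which this one input dominates the bottom line.
```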
Don’t apologise, I think it’s a helpful point!
I agree that the training computation requirements distribution is more subjective and matters more to the eventual output.
I also want to note that while on your view of the compute reqs distribution, the hardware/spending/algorithmic progress inputs are a rounding error, this isn’t true for other views of the compute reqs distribution. E.g. for anyone who does agree with Ajeya on the compute reqs distribution, the XPT hardware/spending/algorithmic progress inputs shift median timelines from ~2050 to ~2090, which is quite consequential. (See here)
For someone like me, who hasn’t thought about the compute reqs distribution properly, I basically agree that this is just an exercise (and in isolation doesn’t show me much about what my timelines should be). But for those who have thought about it, the XPT inputs could either not matter at all (e.g. for you), or matter a lot (e.g. for someone with Ajeya’s compute reqs distribution).
It’s the crux between you and Ajeya, because you’re relatively more in agreement on the other numbers. But I think that adopting the XPT numbers on these other variables would notably lengthen your own timelines, because of the almost complete lack of increase in spending.
That said, if the forecasters agreed with your compute requirements, they would probably also forecast higher spending.
The XPT forecasters are so in the dark about compute spending that I just pretend they gave more reasonable numbers. I’m honestly baffled how they could be so bad. The most aggressive of them thinks that in 2025 the most expensive training run will be $70M, and that it’ll take 6+ years to double thereafter, so that in 2032 we’ll have reached $140M in training-run spending… Do these people have any idea how much GPT-4 cost in 2022?!?!? Did they not hear about the investments Microsoft has been making in OpenAI? And remember, that’s what the most aggressive among them thought! The conservatives seem to be living in an alternate reality where GPT-3 proved that scaling doesn’t work and an AI winter set in in 2020.
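For reference, here is the arithmetic behind those figures, using only the numbers quoted above; the one-year doubling in the second line is purely an illustrative contrast, not a figure from the thread or from the XPT.

```python
# Numbers quoted above: the most aggressive XPT forecast for the largest training run.
start_year, start_cost = 2025, 70e6   # $70M in 2025
doubling_years = 7                    # "6+ years to double"; 7 years = one doubling by 2032

cost_2032 = start_cost * 2 ** ((2032 - start_year) / doubling_years)
print(f"implied largest run in 2032: ~${cost_2032 / 1e6:.0f}M")   # ~$140M

# Purely for contrast, an assumed 1-year doubling time (illustrative only):
print(f"same start, 1-year doubling: ~${start_cost * 2 ** (2032 - start_year) / 1e9:.0f}B")
```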
Perhaps this should be a top-level comment.
Remember, these predictions were made in summer 2022: before ChatGPT, before the big Microsoft investment, and before any serious info about GPT-4. They’re still low, but not ridiculous.
Fair, but still: in 2019 Microsoft invested a billion dollars in OpenAI, roughly half of which was compute (see: Microsoft invests billions more dollars in OpenAI, extends partnership | TechCrunch).
And then GPT-3 happened, and was widely regarded to be a huge success and proof that scaling is a good idea etc.
So the amount of compute-spending that the most aggressive forecasters think could be spent on a single training run in 2032… is about 25% of the compute-spending Microsoft gave OpenAI starting in 2019, before GPT-3 and before the scaling hypothesis caught on. The most aggressive forecasters.
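Spelling out the "about 25%" comparison with the figures from this exchange:

```python
# Most aggressive XPT forecast for a single training run in 2032 (from above).
forecast_2032 = 140e6
# Compute portion of Microsoft's 2019 investment: ~$1B, "roughly half of which was compute".
msft_2019_compute = 0.5 * 1e9

print(f"{forecast_2032 / msft_2019_compute:.0%}")   # 28%, i.e. roughly a quarter
```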
Do you have a write-up of your beliefs that lead you to 2030 as your median?
No, alas. However, I do have this short summary doc I wrote back in 2021: The Master Argument for <10-year Timelines—Google Docs
And this sequence of posts making narrower points: AI Timelines—LessWrong
Also, if you search LW and Astral Codex Ten for comments I’ve made, you might find some useful ones.