Fair, but still: In 2019 Microsoft invested a billion dollars in OpenAI, roughly half of which was compute ("Microsoft invests billions more dollars in OpenAI, extends partnership," TechCrunch).
And then GPT-3 happened, and was widely regarded as a huge success and as proof that scaling is a good idea, etc.
So the amount of compute spending that the most aggressive forecasters think could go into a single training run in 2032… is about 25% of the compute spending Microsoft gave OpenAI starting in 2019, before GPT-3 and before the scaling hypothesis. The most aggressive forecasters.
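For concreteness, a back-of-the-envelope sketch of that comparison, assuming the figures above (a $1B investment, roughly half of it compute) and taking the stated 25% ratio at face value; the dollar amounts are illustrative, not sourced beyond the TechCrunch article:

```python
# Rough arithmetic behind the comparison above (illustrative figures only).

microsoft_investment_2019 = 1_000_000_000  # the 2019 investment, in dollars
compute_share = 0.5                        # "roughly half of which was compute"
compute_spending_2019 = microsoft_investment_2019 * compute_share  # ~$500M

# The claim: the most aggressive 2032 single-training-run forecast is
# about 25% of that 2019 compute spending.
implied_2032_forecast = 0.25 * compute_spending_2019  # ~$125M

print(f"2019 compute spending:              ${compute_spending_2019:,.0f}")
print(f"Implied 2032 training-run forecast: ${implied_2032_forecast:,.0f}")
```

That works out to roughly $125M, which is the point: the "aggressive" 2032 figure is smaller than what a single lab had already received for compute in 2019.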