I’m wondering whether it might be better to take the cutoff as 2016, the start of the large-scale era[1]? I’m guessing that it would give similar results. Let’s see: from 2016 to 2022, NVIDIA revenue grows from $6.13B to $28.56B, a compound growth of 29.2% per year, or 6.62% per quarter. For FLOP/$, it seems more relevant to take “the trend implied by GPUs used in ML”, which “shows a doubling time of 2.07 years”. This translates into FLOP/$ increasing at 8.73% per quarter.
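As a sanity check, the quoted rates can be reproduced from the figures above (this is my assumed derivation, not taken from the original sources):

```python
# Quick check of the growth rates quoted above.
annual = (28.56 / 6.13) ** (1 / 6) - 1   # revenue CAGR, 2016 -> 2022 (6 years)
quarterly = (1 + annual) ** (1 / 4) - 1  # same growth expressed per quarter
flops_q = 2 ** (1 / (2.07 * 4)) - 1      # FLOP/$ doubling every 2.07 years, per quarter

print(f"{annual:.1%} per year, {quarterly:.2%} per quarter, FLOP/$: {flops_q:.2%} per quarter")
```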
Combining these facts[2], we can start with $6.13B / (1.0662 + 0.0873)^30 = $0.0852B as an estimate of NVIDIA’s AI quarterly revenue 7 years ago adjusted for hardware efficiency, and sum over the next 30 quarters of growth (start of 2016 to Q2 2023) to get $39.7B[3]. NVIDIA is responsible for about 80% of the market share, at least according to most sources, so let’s just say that the true amount is $39.7B × 1/0.8 = $49.6B. Call it $50B.
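The whole calculation can be sketched in a few lines (my assumed reading of the formulas above; small differences from the quoted figures come down to rounding of the growth rates):

```python
# Back-of-the-envelope: deflate the $6.13B quarterly figure back 30 quarters
# by the combined revenue-growth and FLOP/$ factors, then sum the quarterly
# series forward over 2016 to Q2 2023 and scale up for ~80% market share.
q_growth = 0.0662                 # revenue growth per quarter
q_flops = 0.0873                  # FLOP/$ improvement per quarter
r = 1 + q_growth + q_flops        # combined per-quarter factor, 1.1535

start = 6.13 / r**30              # ~ $0.085B at the start of 2016
total = sum(start * r**k for k in range(30))  # sum over 30 quarters
adjusted = total / 0.8            # correct for NVIDIA's ~80% market share

print(f"start: ${start:.4f}B, total: ${total:.1f}B, adjusted: ${adjusted:.1f}B")
```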
This cutoff also makes sense because it seems unlikely that pre-2016 machines would be economical to run from an energy-use perspective.
I’m copying Matthew’s final paragraph here, using my numbers.
Note that this is in line with my answer of $35.4B here for Matthew’s calculation.