n counts the number of ML systems in the analysis at the time of writing. (We have added more systems in the meantime.) Examples of such systems are GPT-3, AlphaFold, etc. - basically, each system is a row in our dataset.
Right, good point. I’ll add the number of systems for the given time period.
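For concreteness, here is a minimal sketch of how such a per-period count could be derived from the dataset. The column names (system, publication_date) and the example rows are assumptions for illustration, not our actual schema:

import pandas as pd

# Hypothetical dataset: one row per ML system (e.g. GPT-3, AlphaFold).
# Column names and values are illustrative assumptions, not the real schema.
df = pd.DataFrame({
    "system": ["GPT-3", "AlphaFold", "PaLM"],
    "publication_date": pd.to_datetime(["2020-05-28", "2021-07-15", "2022-04-04"]),
})

def n_systems(df, start, end):
    """Count systems whose publication date falls in [start, end)."""
    mask = (df["publication_date"] >= start) & (df["publication_date"] < end)
    return int(mask.sum())

print(n_systems(df, "2020-01-01", "2022-01-01"))  # -> 2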
That’s hard to answer. I don’t think OpenAI misinterpreted anything. For the moment, I think it’s probably a mixture of:
the inclusion criteria for the systems on which we base this trend
genuinely slower doubling times, for reasons we still need to figure out
Nonetheless, as outlined in Part 1 - Section 2.3, I have not interpreted those trends yet, but I'm interested in a discussion and plan to write up my thoughts on this in the future; a rough sketch of the doubling-time arithmetic is below.
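To make the doubling-time point concrete, this is a minimal sketch (not our actual pipeline) of how a doubling time is typically read off a log-linear fit of training compute over time, and why the inclusion criteria, i.e. which rows enter the fit, directly move the estimate. The compute values are made up for illustration:

import numpy as np

# Hypothetical data: publication year and training compute (FLOP) per system.
# Values are invented for illustration only.
years = np.array([2018.0, 2019.0, 2020.5, 2021.5, 2022.0])
compute_flop = np.array([1e21, 5e21, 3e22, 1e23, 4e23])

def doubling_time_years(years, compute):
    """Fit log2(compute) ~ year and return the implied doubling time in years."""
    slope, _ = np.polyfit(years, np.log2(compute), 1)  # doublings per year
    return 1.0 / slope

print(round(doubling_time_years(years, compute_flop), 2))

# Changing the inclusion criteria (dropping or adding rows) changes the fit:
subset = slice(1, None)  # e.g. exclude the earliest system
print(round(doubling_time_years(years[subset], compute_flop[subset]), 2))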
Thanks, Michael.