Matthew Barnett’s compute-based framework for thinking about the future of AI corroborates my view that data is not likely to be a bottleneck. Moreover, contrary to the section “against very short timelines”, I argue that the data and framework he uses are enough to make one even more worried than I am in the OP: 1 OOM of FLOP more than I previously said (“100x the compute used for GPT-4”) is likely available to some actors basically now, or 4 OOM once you include the algorithmic improvements that come “for free” with compute scaling. This range (10^28–10^31 FLOP) means AGI is possible this year.
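To make the OOM arithmetic above explicit, here is a minimal sketch. The GPT-4 training-compute figure is an assumption on my part (a commonly cited outside estimate of roughly 2×10^25 FLOP, not a number from the comment itself); the scaling steps just multiply out the orders of magnitude claimed above.

```python
import math

# Assumption (not from the comment): GPT-4 training run ~2e25 FLOP,
# a commonly cited external estimate.
gpt4_flop = 2e25

# "100x the compute used for GPT-4" -- the baseline I previously stated.
baseline = gpt4_flop * 100        # ~2e27 FLOP

# One more OOM of raw compute, plausibly available to some actors now.
plus_1_oom = baseline * 10        # ~2e28 FLOP

# Four OOM total, counting ~3 OOM of "free" algorithmic improvements
# that come along with compute scaling.
plus_4_oom = baseline * 10**4     # ~2e31 FLOP

print(f"baseline:           {baseline:.0e} FLOP")
print(f"+1 OOM raw compute: {plus_1_oom:.0e} FLOP")
print(f"+4 OOM effective:   {plus_4_oom:.0e} FLOP")
```

The two endpoints land at roughly 10^28 and 10^31 effective FLOP, which is the range referenced above.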