I agree that the training computation requirements distribution is more subjective and matters more to the eventual output.
I also want to note that while, on your view of the compute reqs distribution, the hardware/spending/algorithmic progress inputs are a rounding error, this isn’t true for other views of that distribution. E.g. for anyone who does agree with Ajeya on the compute reqs distribution, the XPT hardware/spending/algorithmic progress inputs shift median timelines from ~2050 to ~2090, which is quite consequential. (See here)
For someone like me, who hasn’t thought about the compute reqs distribution properly, I basically agree that this is just an exercise (and in isolation doesn’t show me much about what my timelines should be). But for those who have thought about it, the XPT inputs could either not matter at all (e.g. for you), or matter a lot (e.g. for someone with Ajeya’s compute reqs distribution).
Don’t apologise, I think it’s a helpful point!