Nice write-up on the issue.
One thing I will say is that I’m probably unusually optimistic about power concentration compared to a lot of EAs/LWers, and the main divergence is that I basically treat this counter-argument as decisive enough that I don’t think the power-concentration risk goes through, even in scenarios where humanity is about as careless as possible.
This is due to evidence on human utility functions showing that most people’s returns on exclusive goods for personal use diminish fast enough that altruism matters much more than their selfish desires at stellar/galactic scales, combined with my being a relatively big believer that quite a few risks, like suffering risks, are very cheap to solve via moral trade in areas where most humans are apathetic.
More generally, I’ve become mostly convinced that a crucial positive consideration for any post-AGI/ASI future is that it’s really, really easy to prevent most of the worst outcomes in those futures under a broad array of values, even if moral objectivism/moral realism is false and there isn’t much convergence on values among the broader population.
For what it’s worth, I think pre-training alone is probably enough to get us to roughly 1-3 month time horizons, based on a 7-month doubling time, but pre-training data will start to run out in the early 2030s, after which (in the absence of other benchmarks) you no longer have very good general proxies for capability improvements.
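As a rough sketch of the extrapolation I have in mind: assuming a current time horizon of around 2 hours and a 2025 starting point (both my own illustrative numbers, not anything from the post), a 7-month doubling time reaches 1-3 month horizons right around when pre-training data runs out:

```python
import math

# Back-of-the-envelope doubling-time extrapolation.
# The 7-month doubling time is from the comment above; the ~2-hour current
# horizon and the 2025 start year are illustrative assumptions of mine.
DOUBLING_TIME_MONTHS = 7
START_HORIZON_HOURS = 2        # assumed current time horizon
START_YEAR = 2025              # assumed starting point
HOURS_PER_MONTH = 30 * 24      # ~720 hours in a month

for target_months in (1, 3):
    target_hours = target_months * HOURS_PER_MONTH
    doublings = math.log2(target_hours / START_HORIZON_HOURS)
    years_needed = doublings * DOUBLING_TIME_MONTHS / 12
    print(f"{target_months}-month horizon: ~{doublings:.1f} doublings, "
          f"reached around {START_YEAR + years_needed:.0f}")
```

Under those assumptions this lands at roughly 2030 for a 1-month horizon and roughly 2031 for a 3-month horizon, i.e. about when the data runs out.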
The real issue isn’t the difference between hours-long and months-long tasks, but the difference between months-long tasks and century-long tasks, which Steve Newman describes well here.