Oh, apologies: I’m not actually trying to claim that things will be *exactly* exponential. We should expect some amount of ~variation in progress/growth (these are rough models, we shouldn’t be too confident about how things will go, etc.); what’s actually going on is (probably a lot) more complicated than a simple/neat progression of new s-curves.
The thing I’m trying to say is more like:
When we’ve observed some datapoints about a thing we care about, and they seem to fit some overall curve (e.g. exponential growth) reasonably well,
then pointing to specific drivers that we think are responsible for the changes — & focusing on how those drivers might progress or be fundamentally limited, etc. — often makes us (significantly) overestimate bottlenecks/obstacles standing in the way of progress on the thing that we actually care about.
And placing some weight on the prediction that the curve will simply continue[1] seems like a useful heuristic / counterbalance (and has performed well).
(Apologies if what I’d written earlier was unclear about what I believe — I’m not sure if we still notably disagree given the clarification?)
A different way to think about this might be something like:
The drivers that we can point to are generally only part of the picture, and they’re often downstream of some fuzzier higher-level/“meta” force (or a portfolio of forces) like “incentives+...”
It’s usually quite hard to draw a boundary around literally everything that’s causing some growth/progress
It’s also often hard to imagine, from a given point in time, very different ways of driving the thing forward
(e.g. because we’ve yet to discover other ways of making progress, because proxies we’re looking at locally implicitly bake in some unnecessary assumptions about how progress on the thing we care about will get made, etc.)
So our stories about what’s causing some development that we’re observing are often missing important stuff, and sometimes we should trust the extrapolation more than the stories / assume the stories are incomplete
Something like this seems to help explain why views like “the curve we’re observing will (basically) just continue” have seemed surprisingly successful, even when the people holding those “curve go up” views justified their conclusions via apparently incorrect reasoning about the specific drivers of progress. (And so IMO people should place non-trivial weight on stuff like “rough, somewhat naive-seeming extrapolation of the general trends we’re observing[2].”[3])
Caveat: I’d add “...on a big range / at the scale we care about”; at some point, ~any progress would start hitting ~physical limits. But if that point comes after the curve reshapes ~everything we care about, then I’m basically ignoring that consideration for now.
Obviously there are other caveats, e.g.:
- the metrics we use for such observations can lead us astray in some situations (in particular, they might not relate ~linearly to “the true thing we care about”)
- we often have limited data, we shouldn’t be confident that we’re predicting/measuring the right thing, things can in fact change over time and we shouldn’t forget that, etc.
> And placing some weight on the prediction that the curve will simply continue[1] seems like a useful heuristic / counterbalance (and has performed well).
“and has performed well” seems like a good crux to zoom in on; for which reference class of empirical trends is this true, and how true is it?
It’s hard to disagree with “place some weight”; imo it always makes sense to have some prior that past trends will continue. The question is how much weight to place on this heuristic vs. more gears-level reasoning.
For a random example, observers in 2009 might have mispredicted Spanish GDP over the next ten years if they placed a lot of weight on this prior.
Ah, @Gregory Lewis🔸 says some of the above better. Quoting his comment:
So ~everything is ultimately an S-curve. Yet although ‘this trend will start capping out somewhere’ is a very safe bet, ‘calling the inflection point’ before you’ve passed it is known to be extremely hard. Sigmoid curves in their early days are essentially indistinguishable from exponential ones, and the extra parameter which ~guarantees they can better (over?)fit the points on the graph than a simple exponential gives very unstable estimates of the putative ceiling the trend will ‘cap out’ at. (cf. 1, 2.)
Many important things turn on (e.g.) ‘scaling is hitting the wall ~now’ vs. ‘scaling will hit the wall roughly at the point of the first Dyson-sphere data center’. As the universe is a small place on a log scale, this range is easily spanned by different analysis choices in how you project forward.
Without strong priors on ‘inflecting soon’ vs. ‘inflecting late’, forecasts tend to be volatile: is this small blip above or below trend really a blip, or a sign we’re entering a faster/slower regime?
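A minimal sketch of that instability point (not from the quoted comment; all numbers are invented for illustration, and `numpy`/`scipy` are assumed available): generate noisy data from a logistic curve, fit a logistic using only points from before the inflection, and watch the estimated ceiling swing around while the true value stays fixed:

```python
# Illustration only: ceiling estimates from a logistic fit are very unstable
# when all the observed data comes from the early, exponential-looking phase.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # K = ceiling the trend "caps out" at, r = growth rate, t0 = inflection point
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(0)
true_K, true_r, true_t0 = 100.0, 0.5, 20.0
t_full = np.arange(40, dtype=float)
y_full = logistic(t_full, true_K, true_r, true_t0) * (1 + 0.05 * rng.standard_normal(40))

# Fit using only data from before the true inflection point at t = 20.
for cutoff in (10, 13, 16):
    t, y = t_full[:cutoff], y_full[:cutoff]
    (K_hat, _, _), _ = curve_fit(
        logistic, t, y, p0=(2 * y.max(), 0.3, float(cutoff)),
        bounds=([1e-3, 1e-3, -100.0], [1e7, 5.0, 500.0]), maxfev=50000)
    print(f"data up to t={cutoff:2d}: fitted ceiling ~ {K_hat:10.1f} (true ceiling = 100)")
```

By contrast, fitting the same model to data that extends well past the inflection recovers the ceiling fairly reliably — which is the asymmetry the comment is pointing at: early on, the ceiling parameter is barely constrained by the data.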
[See also a classic post on the general topic, and some related discussion here, IIRC: https://www.alignmentforum.org/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress ]
(I think there were nice notes on this here, although I’ve only skimmed and didn’t re-read https://arxiv.org/pdf/2205.15011 )