I think there is plenty of room for debate about what the curve of AI progress/capabilities will look like, and I mostly skimmed the article in about five minutes, but I don’t think your post’s content justified the title (“exponential AI takeoff is a myth”). “Exponential AI takeoff is currently unsupported” or “the common narrative(s) for exponential AI takeoff are based on flawed premises” are plausible conclusions from this post (even if I don’t necessarily agree with them), but I think the original title would require far more compelling arguments to be justified.
(I won’t get too deep into this, but I think it’s plausible that there is significant “methodological overhang”: humans might just struggle to make progress in some fields of research—especially softer sciences and theory-heavy sciences—because principal-agent problems in research plague the accumulation of reliable knowledge through non-experimental methods.)
Hi Harrison, thanks for stating what I guess a few people are thinking—it’s a bit of a clickbait title. I do think, though, that non-exponential growth is much more likely than exponential growth, simply because exponential takeoff would require no constraints on growth at all, while it’s enough for one constraint to kick in (maybe even one I didn’t consider here) to stop exponential growth.
I’d be curious about the methodological overhang, though. Are you aware of any posts/articles discussing this further?
I haven’t looked very hard but the short answer is no, I’m not aware of any posts/articles that specifically address the idea of “methodological overhang” (a phrase I hastily made up and in hindsight realize may not be totally logical) as it relates to AI capabilities.
That being said, I have written about the possibility that our current methods of argumentation and communication could be really suboptimal, here: https://georgetownsecuritystudiesreview.org/2022/11/30/complexity-demands-adaptation-two-proposals-for-facilitating-better-debate-in-international-relations-and-conflict-research/