“The analogies establish almost nothing of importance about the behavior and workings of real AIs”
You seem to be saying that there is some alternative that establishes something about “real AIs,” but then you admit these real AIs don’t exist yet, and that you’re discussing “expectations of the future” by proxy. I’d like to push back: I don’t think you’re really proposing an alternative, or to the extent you are, you aren’t actually defending that alternative clearly.
I agree that arguing by analogy about current LLM behavior is less useful than having a working theory of interpretability and LLM cognition (though we don’t have any such theory, as far as I can tell). But I have an even harder time understanding what you’re proposing as a superior way of discussing a future situation that isn’t amenable to that kind of theoretical analysis, where we’re trying to figure out where we do and do not share intuitions, and which models are or are not appropriate for describing the future technology. I’m not seeing a gears-level model proposed, and I’m not seeing concrete predictions.
Yes, arguing by analogy can certainly be slippery and confusing, and I think it would benefit from grounding in concrete predictions. The use of any specific base rate is also deeply contentious, since reference classes are always debatable. But at least it’s clear what the argument is, since it’s an analogy. In contrast, arguing by direct appeal to your intuitions, where you claim your views are a “straightforward extrapolation of current trends,” is done without reference to your reasoning process. And that reasoning process, because it lacks an explicit gears-level model, rests on informal human reasoning, which, as Lakoff argues, is deeply rooted in metaphor anyway. That seems worse: it’s reasoning by analogy with extra steps.
For example, what does “straightforward” convey when you say “straightforward extrapolation”? The intuition the word builds on is that moving in a straight line, as opposed to extrapolating exponentially or discontinuously, is better or simpler. Is that mode of prediction easier to justify than reasoning via analogies to other types of minds? I don’t know, but it’s not obvious, and dismissing one as analogy while seeing the other as “straightforward” seems confused.
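(To make that point concrete, here’s a minimal toy sketch with made-up numbers, not a claim about any real metric: the same two observations are exactly consistent with both a linear and an exponential trend, so calling one extrapolation “straightforward” already smuggles in a substantive modeling choice.)

```python
# Toy illustration only; the numbers are invented for the sake of argument.
# Two observations fit both a linear and an exponential trend exactly,
# yet the two "extrapolations of current trends" diverge quickly.

def linear(t):
    # value = 1 + 1.5 * t, chosen to pass through (0, 1) and (2, 4)
    return 1.0 + 1.5 * t

def exponential(t):
    # value = 2 ** t, which also passes through (0, 1) and (2, 4)
    return 2.0 ** t

for t in (0, 2, 5, 10):
    print(f"t={t:2d}  linear={linear(t):7.1f}  exponential={exponential(t):7.1f}")

# Output:
# t= 0  linear=    1.0  exponential=    1.0
# t= 2  linear=    4.0  exponential=    4.0
# t= 5  linear=    8.5  exponential=   32.0
# t=10  linear=   16.0  exponential= 1024.0
```

The point isn’t that either curve is the right one; it’s that the word “straightforward” is quietly doing the work of choosing between them.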