That humans can confabulate is not really relevant. The point is that the model's textual output is being claimed as clear and direct evidence of the capacity in question (the model's actual purpose or intentions), yet you can elicit an indistinguishable response when the model is not, and cannot be, reporting its actual intentions. So the test is simply not a good test.