Imagine that you think that artificial neural nets can’t reason at all.
Is this a real position that real living intelligent people actually hold, or is it just one of those funny contrarian beliefs that some philosophers like to toy around with for fun?
I think this is really the position of the stochastic parrots people, yes.
I don’t think it’s plausible, but I think it partly explains their relentless opposition to work on AI safety.
I think this is an actual position. It's the stochastic parrots argument, no? Just recently I saw a post by a cognitive scientist holding this belief.
From a skim, I don't think there were any factual claims in that article; it was entirely normative claims and a few rhetorical questions.