A separate question here is why we should care whether AIs possess “real” understanding at all, if they are functionally very useful and generally competent. If we can create extremely useful AIs that automate labor on a massive scale, yet are existentially safe precisely because they lack real understanding of the world, shouldn’t we just do that?
We should, but if that means they automate less, or less efficiently, than they otherwise would, then short-term financial incentives could outweigh the risks from the perspective of companies or governments, and they could push ahead with risky AIs anyway.