Could you go into a bit more detail about the two linguistic styles you described, perhaps using non-AI examples? My interpretation of them is basically agent-focused vs internal-mechanics-focused, but I’m not sure this is exactly what you mean.
If the above is correct, it seems like you’re basically saying that internal-mechanics-focused descriptions work better for currently existing AI systems, which seems true to me for things like self-driving cars. But for something like AlphaZero, or Stockfish, I think an agentic framing is often actually quite useful:
A chess/Go AI is easy to imagine: it is smart and autonomous, and you can trust the bot like you trust a human player. It can make mistakes but probably has good intent. When it encounters an unfamiliar game situation, it can think about the correct way to proceed. It behaves in accordance with the goal (winning the game) its creator set, and it tends to make smart decisions. If anything goes wrong, the bot is at fault.
So I think the reason this type of language doesn't work well for self-driving cars is that they aren't sufficiently agent-like. But we know genuinely agentic systems can exist (humans are an example), so it seems plausible to me that agentic language will be the best descriptor for such AIs. Certainly it is currently the best descriptor available, given that we do not understand the internal mechanics of as-yet-uninvented AIs.