I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language.
More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacked than they do about speed and strength.
As a side note, I don't see why preferences and goals change any of this. I'm constantly hearing AI (safety) researchers talk about "capabilities research" on today's AI systems, but I don't think most of them think those systems have their own preferences and goals, at least not in the sense that a dog has preferences or goals. I just think it's a word that AI [safety?] researchers use, and I think it's unclear and unhelpful language.
#taboocapabilities
I think game-playing AI is pretty well characterized as having the goal of winning the game, and as being more or less capable of achieving that goal at different degrees of training. Maybe I'm just too used to this language, but it seems very intuitive to me. Do you have any examples of people being confused by it?
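For concreteness, here's roughly the picture I have in mind, as a toy sketch (the `Agent` class, the `skill` numbers, and the fixed opponent are made-up stand-ins for checkpoints of a real training run, not any actual setup): "more capable" just means a higher measured win rate at the one fixed goal, winning.

```python
import random

class Agent:
    """Toy stand-in for a game-playing agent at some training checkpoint."""
    def __init__(self, skill: float):
        self.skill = skill  # stands in for "degree of training"

def play_game(agent: Agent, opponent_strength: float) -> bool:
    """Returns True if the agent wins. Toy model: win probability is the
    agent's skill relative to the combined skill of both players."""
    return random.random() < agent.skill / (agent.skill + opponent_strength)

def win_rate(agent: Agent, opponent_strength: float, n_games: int = 1000) -> float:
    """Empirical win rate against a fixed opponent over n_games."""
    wins = sum(play_game(agent, opponent_strength) for _ in range(n_games))
    return wins / n_games

# "More capable" here is just: a higher win rate at the fixed goal of winning.
for checkpoint_skill in [0.5, 1.0, 2.0, 4.0]:
    agent = Agent(skill=checkpoint_skill)
    print(f"skill={checkpoint_skill}: win rate vs fixed opponent = {win_rate(agent, 1.0):.2f}")
```

In this framing, "capabilities research" is whatever pushes that curve upward, which is why the word seems well-defined to me when the goal is as crisp as winning a game.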