I don’t think it’s restricted only to agentic technologies; my model is for all technologies that involve risk. My toy example is that even producing a knife requires the designer to think about its dangers in advance and propose precautions.
My point is that with, for example, a knife or gun, the only free variable is the human using it, not the technology itself. Thus the meme approximated an important truth. It's when we impart intelligence and agency that the meme starts to become very wrong, because we now need to consider another free variable: the technology itself.
So for technologies themselves to be risks, they need to be essentially agentic and intelligent; only then do they become a free variable for risk, rather than the risk residing solely in the humans wielding them.