From my perspective, the meme of “technology isn’t bad in itself, it’s just how you use it” was mostly true so long as the technologies we discovered were non-agentic. Unfortunately for us, the EV of that meme has now flipped sign and become extremely negative because of AGI.
One bad decision can overwhelm all good decisions, and one good decision can overwhelm all bad decisions.
I don’t think this is restricted to agentic technologies; my model applies to all technologies that involve risk. My toy example is that even producing a knife requires the designer to think about its dangers in advance and propose precautions.
My point is that with a knife or a gun, for example, the only free variable is the human using it, not the technology itself. In that regime the meme approximated an important truth. It’s when we impart intelligence and agency that the meme starts to become very wrong, because we now need to consider another free variable: the technology itself.
So for technologies themselves to be risks, rather than only the humans wielding them, they essentially need to be agentic and intelligent, because that is what turns the technology itself into a free variable for risk.