Oh, I thought you had much more intense things in mind than that. A malicious actor using LLMs in some hacking scheme to cause security breaches seems probable to me.
But that wouldn’t cause instability to go above baseline. Things like this happen every year. Russia invaded Ukraine last year, for example—for the world to generally become less stable there needs to be either events that are a much bigger deal than that invasion, or events like that invasion happening every few months.
I guess that really depends on how deep this particular problem runs. If it makes most big companies very vulnerable, since most employees use LLMs which are susceptible to prompt injections, I’d expect this to cause more chaos in the US than Russia’s invasion of Ukraine. I think we’re talking slightly past each other, though: I wanted to make the point that the baseline (non-existential) chaos from agentic AI should be high, since near-term, non-agentic AI may already cause a lot of chaos. I was not comparing it to other causes of chaos, though I’m very uncertain about how these will compare.
I’m surprised, btw, that you don’t expect a (sufficient) fire alarm solely on the basis of short timelines. To me, the relevant issue seems more ‘how many more misaligned AIs, with what level of capabilities, will be deployed before takeoff’. Since a lot more models with higher capabilities got deployed recently, short timelines don’t change the picture for me. If anything, the last few months have made me expect non-existential disasters before takeoff more, since AI companies seem to just release every model & new feature they have. I’d also expect a slow takeoff of misaligned AI to raise the chances of a loud warning shot & of the general public having a Covid-in-Feb-2020 wake-up moment on the issue.
I definitely agree that near-term, non-agentic AI will cause a lot of chaos. I just don’t expect it to be so much chaos that the world as a whole feels significantly more chaotic than usual. But I agree that might happen too.
I also agree that this sort of thing will have a warning-shot effect that makes a Covid-in-Feb-2020-type response plausible.
It seems we maybe don’t actually disagree that much?