In this case, the positions from the last bullseyes become reversed. The doomer will argue that the AI might start off incapable but will quickly evolve into a capable super-AI, following path A, whereas I will retort that it might get more powerful, but that doesn’t guarantee it will ever actually become capable of world domination.
No, the doomer says, “If that AI doesn’t destroy the world, people will build a more capable one.” Current AIs haven’t destroyed the world. So people are trying to build more capable ones.
There is some weird thing here about people trying to predict trajectories, not endpoints; they get as far as describing, in their story, an AI that doesn’t end the world as we know it, and then they stop, satisfied that they’ve refuted the doomer story. But if the world as we know it continues, somebody builds a more powerful AI.
My point is that the trajectories affect the endpoints. You have fundamentally misunderstood my entire argument.
Say a rogue, flawed AI has recently killed ten million people before being stopped. That results in a large amount of new regulation, research, and security changes.
This can have two effects:
Firstly (if AI research isn’t shut down entirely), it makes it more likely that the AI safety problem will be solved, due to increased funding and urgency.
Secondly, it raises the difficulty level of future takeover attempts, due to awareness of AI tactics, increased monitoring, tighter security, international agreements, etc.
If the difficulty level increases faster than AI capabilities can catch up, then humanity wins.
Suppose we end up with a future where every time a rogue AI pops up, there are 1000 equally powerful safe AIs there to kill it in its crib. In this case, scaling up the power levels doesn’t matter: the new, more powerful rogue AI is met by 1000 new, more powerful safe AIs. At no point does it become capable of world domination.
The other possible win condition is that enough death and destruction is wrought by failed AIs that humanity bands together to ban AI entirely, and successfully enforces this ban.