My point is that the trajectories affect the endpoints. You have fundamentally misunderstood my entire argument.
Say a rogue, flawed AI has recently killed ten million people before being stopped. That results in a large amount of new regulation, research, and security changes.
This can have two effects:
Firstly (if AI research isn’t shut down entirely), it makes it more likely that the AI safety problem will be solved, due to increased funding and urgency.
Secondly, it raises the difficulty of future takeover attempts, due to awareness of AI tactics, increased monitoring, tighter security, international agreements, etc.
If that difficulty level increases faster than AI capabilities can catch up, then humanity wins.
Suppose we end up with a future where every time a rogue AI pops up, there are 1,000 equally powerful safe AIs there to kill it in its crib. In this case, scaling up the power levels doesn’t matter: the new, more powerful rogue AI is met by 1,000 new, more powerful safe AIs. At no point does it become capable of world domination.
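To make the rate argument concrete, here’s a minimal toy sketch of the race between takeover difficulty and rogue-AI capability. All the numbers (starting levels, growth rates, time horizon) are purely illustrative assumptions I made up for the sketch, not estimates of anything real:

```python
# Toy model: does takeover difficulty stay ahead of rogue-AI capability?
# All parameters are illustrative assumptions, not estimates.

def humanity_wins(initial_capability=1.0, initial_difficulty=2.0,
                  capability_growth=1.30, difficulty_growth=1.35, years=50):
    """Return True if rogue-AI capability never exceeds takeover difficulty.

    Each 'year', capability multiplies by capability_growth, and difficulty
    multiplies by difficulty_growth (each failed takeover triggers more
    regulation, monitoring, and defensive AIs, raising the bar further).
    """
    capability, difficulty = initial_capability, initial_difficulty
    for _ in range(years):
        if capability > difficulty:
            return False  # a rogue AI crosses the world-domination threshold
        capability *= capability_growth
        difficulty *= difficulty_growth
    return True  # the bar stayed ahead of capabilities over the whole horizon

# Difficulty compounding slightly faster than capability -> the gap widens and
# no takeover attempt ever becomes feasible; flip the rates and it fails fast.
print(humanity_wins())                        # True
print(humanity_wins(capability_growth=1.4))   # False
```

The only thing the sketch shows is that the outcome hinges on which curve compounds faster, not on the absolute power level at any given moment, which is the point of the 1,000-safe-AIs scenario above.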
The other possible win condition is that failed AIs wreak enough death and destruction that humanity bands together to ban AI entirely, and successfully enforces that ban.