I think you drastically overestimate how many chances the AI gets to act on misalignment, because those trillions of executions will each use far, far too little compute per single action to lead to a takeover.
That little compute leads to much more once the AI has escaped!
If we manage to catch an AI doing bad stuff
The point is that we won’t, unless we have many more 9s of reliability in terms of catching such attempts!
I agree that, conditional on escapes/rogue internal deployments like this scenario by Buck (with a lot of contributors), we get much larger disasters, and if the AI is unaligned, then unless we have an aligned AI with somewhat similar capabilities, we lose.
My point is more that you are way overestimating how many chances the AI has to overthrow us before it is aligned.
But the crux might be that I don’t think we need that much reliability in catching AIs: as long as the AI is caught with non-negligible probability, it has to be very cautious, because being caught is far worse for the AI than either succeeding or the game simply ending.
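The claim above can be illustrated with a toy expected-value calculation. This is only a sketch: the utilities `u_success` and `u_caught` are hypothetical numbers chosen for illustration, not anything from the thread.

```python
# Toy model: the AI weighs attempting a takeover against doing nothing (EV 0).
# u_success and u_caught are hypothetical utilities, assumed for illustration:
# being caught is taken to be ~100x as bad as success is good.

def attempt_ev(p_caught: float, u_success: float = 1.0, u_caught: float = -100.0) -> float:
    """Expected utility (to the AI) of attempting a takeover, given a catch probability."""
    return (1 - p_caught) * u_success + p_caught * u_caught

# Even a 5% chance of being caught makes attempting negative-EV relative to idling:
print(attempt_ev(0.05))  # (0.95 * 1) + (0.05 * -100) = -4.05
```

Under these assumptions, the catch probability does not need to be anywhere near reliable; it only needs to be non-negligible relative to how lopsided the payoffs are.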
In essence, we probably only need to catch it once, or at most a few times, to completely change the game in ways that massively favor alignment success.
You do not need to reliably catch someone if you can make the costs of being caught very painful, and I suspect we will probably be able to do this (perhaps using trusted AI labor, to use Ryan Greenblatt and Buck’s term).
This is begging the question! My whole objection is that alignment of ASI hasn’t been established to be possible.
as long as the AI is caught with non-negligible probability, it has to be very cautious, because being caught is far worse for the AI than either succeeding or the game simply ending.
So it will worry about living in a kind of panopticon? That seems pretty unlikely. Why should the AI care about being caught any more than it cares about any given runtime instance of itself being terminated?
https://www.lesswrong.com/posts/ceBpLHJDdCt3xfEok/ai-catastrophes-and-rogue-deployments