As a side note the actual things that break this loop are
(1) we don’t use superintelligent singletons and probably won’t, I hope. We instead create context limited model instances of a larger model and tell it only about our task and the model doesn’t retain information. This “break an ASI into a billion instances each which lives only in the moment” is a powerful alignment method
(2) it seems to take an absolutely immense amount of compute hardware to host even today’s models which are significantly below human intelligence in some expensive to fix ways. (For example how many H100s would you need for useful realtime video perception?)
This means a “rogue” Singleton would have nowhere to exist, as it would be too heavy in weights and required bandwidth to run on a botnet.
This breaks everything else.
It’s telling that Bostroms PhD is in philosophy and I don’t see any industry experience on his wiki page. He is correct if you ignore real world limitations on AI.
we don’t use superintelligent singletons and probably won’t, I hope. We instead create context limited model instances of a larger model and tell it only about our task and the model doesn’t retain information.
FYI, current cutting-edge large language models are trained on a massive amount of text on the internet (in the case of GPT-4, likely approximately all the text OpenAI could get their hands on). So they certainly have tons of information about stuff other than the task at hand.
What it means is the model has no context of its history since training. It has no context if the task it has been given is “real”. It does not know if other copies of itself or other AIs are checking it’s outputs for correctness, with serious consequences if it sabotages the output. It doesn’t know it’s not still in training.
It doesn’t know if there are a billion instances of it or just 1.
We can scrub all this information fairly easily and we already do this as of right now.
We can also make trick output where we try to elicit latent deception by giving information that would tell the model its time to betray.
We can also work backwards and find what the adversarial inputs are. When will the model change it’s answer for this question?
As a side note the actual things that break this loop are
(1) we don’t use superintelligent singletons and probably won’t, I hope. We instead create context limited model instances of a larger model and tell it only about our task and the model doesn’t retain information. This “break an ASI into a billion instances each which lives only in the moment” is a powerful alignment method
(2) it seems to take an absolutely immense amount of compute hardware to host even today’s models which are significantly below human intelligence in some expensive to fix ways. (For example how many H100s would you need for useful realtime video perception?)
This means a “rogue” Singleton would have nowhere to exist, as it would be too heavy in weights and required bandwidth to run on a botnet.
This breaks everything else.
It’s telling that Bostroms PhD is in philosophy and I don’t see any industry experience on his wiki page. He is correct if you ignore real world limitations on AI.
FYI, current cutting-edge large language models are trained on a massive amount of text on the internet (in the case of GPT-4, likely approximately all the text OpenAI could get their hands on). So they certainly have tons of information about stuff other than the task at hand.
This is not what that statement means.
What it means is the model has no context of its history since training. It has no context if the task it has been given is “real”. It does not know if other copies of itself or other AIs are checking it’s outputs for correctness, with serious consequences if it sabotages the output. It doesn’t know it’s not still in training. It doesn’t know if there are a billion instances of it or just 1.
We can scrub all this information fairly easily and we already do this as of right now.
We can also make trick output where we try to elicit latent deception by giving information that would tell the model its time to betray.
We can also work backwards and find what the adversarial inputs are. When will the model change it’s answer for this question?