I’m very surprised that we’re six levels deep into a disagreement and still actively confused about each other’s arguments. I thought our opinions were much more similar. This suggests that we should schedule a time to talk in person, and/or attempt an adversarial collaboration in which we try to write out the version of the argument that you’re thinking of. (The latter might be more efficient than this exchange, while also producing a useful public record.)
Thanks for the thorough + high-quality engagement; I really appreciate it.
Due to time constraints I’ll just try to hit two key points in this reply (even though I don’t think your responses resolved any of the other points for me, which I’m still very surprised by).
If you replace “perfect optimization” with “significantly-better-than-human optimization” in all of my claims, I’d continue to agree with them.
We are already at significantly-better-than-human optimisation, because none of us can take an environment and output a neural network that does well in that environment, but stochastic gradient descent can. We could make SGD many, many times better and it still wouldn’t produce a malicious superintelligence when trained on CIFAR, because there just isn’t any gradient pushing it in the direction of intelligence; it’ll train an agent to memorise the dataset far before that. And if the path to tampering is a few dozen steps long, the optimiser won’t find it before the heat death of the universe (because the agent has no concept of tampering to work from; all it knows is CIFAR). So when we’re talking about not-literally-perfect optimisers, you definitely need more than just amazing optimisation and hard-coded objective functions for trouble to occur: you also need lots of information about the world, maybe a bunch of interaction with it, maybe a curriculum. This is where the meat of the argument is, to me.
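To make the setup I have in mind concrete, here is a minimal sketch of the kind of training run being discussed (assuming PyTorch and torchvision; the small convnet and hyperparameters are arbitrary choices for illustration, not anything specified above). The only signal the optimiser ever receives is the gradient of a hard-coded classification loss on CIFAR-10.

```python
# Minimal sketch: SGD optimising a hard-coded objective on CIFAR-10.
# Assumes torch and torchvision are installed; the architecture is arbitrary.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small convnet for 32x32x3 CIFAR images; nothing about it is special.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)

train_data = datasets.CIFAR10(root="data", train=True, download=True,
                              transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=128, shuffle=True)

optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()  # the hard-coded objective function

for epoch in range(10):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)  # only signal: label error on CIFAR
        loss.backward()
        optimiser.step()  # a stronger optimiser just descends this loss faster
```

However good the optimiser in that last line gets, every gradient it follows points towards lower classification loss on the training images; there is no term that rewards modelling, or interacting with, the wider world.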
I think spreading the argument “if we don’t do X, then we are in trouble because of problem Y” seems better. … The former is easier to understand and more likely to be true / correctly reasoned.
I previously said:
I’m still not sure what the value of a “default assumption” is if it’s not predictive, though.
And I still have this confusion. It doesn’t matter if the argument is true and easy to understand if it’s not action-guiding for anyone. Compare the argument: “if we (=humanity) don’t remember to eat food in 2021, then everyone will die”. Almost certainly true. Very easy to understand. Totally skips the key issue: why we should assign high enough probability to this specific hypothetical to bother worrying about it.
So then I guess your response is something like “But everyone forgetting to eat food is a crazy scenario, whereas the naive extrapolation of the thing we’re currently doing is the default scenario”. (Also, sorry if this dialogue format is annoying; I found it an easy way to organise my thoughts, but I appreciate that it runs the risk of strawmanning you.)
To which I respond: there are many ways of naively extrapolating “the thing we are currently doing”. For example, the thing we’re currently doing is building AI with a 100% success record at not taking over the world. So my naive extrapolation says we’ll definitely be fine. Why should I pay any attention to your naive extrapolation?
I then picture you saying: “I’m not using these extrapolations to make probabilistic predictions, so I don’t need to argue that mine is more relevant than yours. I’m merely saying: once our optimisers get really really good, if we give them a hard-coded objective function, things will go badly. Therefore we, as humanity, should do {the set of things which will not lead to really good optimisers training on hard-coded objective functions}.”
To which I firstly say: no, I don’t buy the claim that once our optimisers get really really good, if we give them a hard-coded objective function, “an existential catastrophe almost certainly happens”, for the reasons I described above.
Secondly, even if I do accept your claim, I think I could just point out: “You’ve defined what we should do in terms of its outcomes, but in an explicitly non-probabilistic way. So if the entire ML community hears your argument, agrees with it, and then commits to doing exactly what they were already doing for the next fifty years, you have no grounds to complain, because you have not actually made any probabilistic claims about whether ‘exactly what they were already doing for the next fifty years’ will lead to catastrophe.” So again, why is this argument worth making?
Man, this last point felt really nitpicky, but I don’t know how else to convey my intuitive feeling that there’s some sort of motte-and-bailey happening in your argument. Again, let’s discuss this at higher bandwidth.
This suggests that we should schedule a time to talk in person, and/or attempt an adversarial collaboration in which we try to write out the version of the argument that you’re thinking of.
Sounds good; I’ll just clarify my position in this response, rather than arguing against your claims.
So then I guess your response is something like “But everyone forgetting to eat food is a crazy scenario, whereas the naive extrapolation of the thing we’re currently doing is the default scenario”.
It’s more like “there isn’t any intellectual work to be done / field building to do / actors to coordinate to get everyone to eat”.
Whereas in the AI case, I don’t know how we’re going to fix the problem I outlined, and as far as I can tell neither does anyone else in the AI community; therefore there is intellectual work to be done.
We are already at significantly-better-than-human optimisation
Sorry, by optimization there I meant something more like “intelligence”. I don’t really care whether it comes from better SGD, some hardcoded planning algorithm, or a mesa optimizer; the question is whether it is significantly more capable than humans at pursuing goals.
I thought our opinions were much more similar.
I think our predictions of how the world will go concretely are similar; but I’d guess that I’m happier with abstract arguments that depend on fuzzy intuitive concepts than you are, and find them more compelling than more concrete ones that depend on a lot of specific details.
Just want to say that I’ve found this exchange quite interesting, and would be keen to read an adversarial collaboration between you two on this sort of thing. Seems like that would be a good addition to the set of discussions there’ve been about key cruxes related to AI safety/alignment.
(ETA: Actually, I’ve gone ahead and linked to this comment thread in that list as well, for now, as it was already quite interesting.)