This was a really valuable contribution. I think the author had an angle that was definitely worth sharing, and I’m glad Jakub put it on this forum.
It did not cause me to update materially away from worrying about AI alignment. (My estimates of P(doom) vacillate between 0.1% and 10% -- those with a higher P(doom) might have a different reaction.)
There are two reasons why I didn’t find this compelling enough to materially change my mind:
1) I wasn’t very convinced by the claim that artificial superintelligence won’t be coherent.
For example, the chart in the post is very suggestive of the author’s main claim, namely that more intelligent things behave less coherently.
However, AI is being developed in ways very different from how humans and animals came into being, so evidence drawn from biological minds is not compelling grounds for expecting AGI to be incoherent.
2) I’m not reassured by the thought of an incoherent superintelligence.
The author was very good at making this assumption explicit.
I wasn’t clear on exactly why the author thought it was good for the AI to be incoherent, but I think the argument was that the AI would be self-sabotaging.
I didn’t find this convincing. Sure, humans definitely do self-sabotage, but they still control the earth; the fate of gorillas and trees remains in human hands, even though trees don’t self-sabotage at all.
If anything, the prospect of an incoherent AGI makes alignment even harder, and makes me more worried.
I agree that an “incoherent superintelligence” does not sound very reassuring. Imagine someone saying this:
“I’m not too worried about advanced AI. I think it will be a superintelligent hot mess. By this I mean an extremely powerful machine that has various conflicting goals. What could possibly go wrong?”