Note that if you take observations of a superintelligent tic-tac-toe ANI (it plays exactly the way we know perfect play looks, and we can tie it if we play first), then of AlphaZero at chess, then of top go bots, and extrapolate along the dimension of how rich the domain's strategy space is (as per Eliezer's comment), I think you get a different overall takeaway than the one in this post.
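The tic-tac-toe claim is just the standard game-theoretic fact that perfect play is a draw. A minimal minimax sketch (my own illustration, not from the post) can verify it by brute force:

```python
# Brute-force minimax over tic-tac-toe, verifying that the game value
# from the empty board is 0: perfect play by both sides is a draw, so
# a perfect human can always tie even a superintelligent opponent.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def value(b, player):
    """Game value from X's perspective: +1 X wins, -1 O wins, 0 draw."""
    w = winner(b)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i in range(9) if b[i] is None]
    if not moves:
        return 0  # board full, no winner: draw
    vals = []
    for m in moves:
        nb = list(b)
        nb[m] = player
        vals.append(value(tuple(nb), 'O' if player == 'X' else 'X'))
    return max(vals) if player == 'X' else min(vals)

print(value((None,) * 9, 'X'))  # prints 0
```

The search visits every reachable game (a few hundred thousand positions), so it runs in well under a minute without memoization; the printed 0 is the exhaustively verified game value.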
The situation in go looks different:

- New top-of-the-line go bots play openings different from those we played in 2015, including ideas we had dismissed as bad (e.g. early 3-3 invasions).
- Humans have since adopted go-bot opening sequences.
That said:

- Go bots are not literally invincible: there is a handicap level at which humans can beat them.
- We can gain insight into go-bot play by studying the bots' readouts and thinking for a long time.
- It is possible for humans to beat top go bots in a fair fight...
- ... but the way that happened was by training an adversary bot specifically to exploit them, then copying that adversary's play style.