Just want to say that I’ve found this exchange quite interesting, and would be keen to read an adversarial collaboration between you two on this sort of thing. Seems like that would be a good addition to the set of discussions there’ve been about key cruxes related to AI safety/alignment.
(ETA: Actually, I’ve gone ahead and linked to this comment thread in that list as well, for now, as it was already quite interesting.)