Just want to say that I've found this exchange quite interesting, and would be keen to read an adversarial collaboration between you two on this sort of thing. Seems like that would be a good addition to the set of discussions there've been about key cruxes related to AI safety/alignment.
(ETA: Actually, I've gone ahead and linked to this comment thread in that list as well, for now, as it was already quite interesting.)