Excellent question, and apologies, I should have been clearer. I listed him because he is, of course, one of the top computer scientists in deep learning. Also note that I did caveat that "I don’t have a strong sense if each and every one of these items should really be funded, because I have not vetted them thoroughly, but I hope that they might serve as an inspiration for further research". The idea behind this item is that it might be worthwhile to try to convince (and incentivise through funding) one of the top computer scientists in ML to work on AI safety. But I agree there may be others like him who would be better suited. Perhaps you have someone better in mind?
Also, note that many people start out dismissive of safety, and Cho has been retweeting Miles Brundage quite often recently, so perhaps he could be convinced to work on this, especially if given funding to work on, e.g., ‘concrete problems in AI safety’. So I wouldn’t rule him out based on anecdotal evidence.