According to Wikipedia, “Regarding existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future”. So it seems plausible that he is not really interested in AI timelines and forecasting. If this is the case, then I think having Ajeya Cotra write the report is preferable to Geoffrey Hinton.
As a more general point, it is not clear whether good experts in AI are good experts at AGI forecasting. AI differs here from climate change, in that climate science inherently deals far more with forecasting.
Does having good intuitions about which neural network architectures make it easier to tell traffic lights apart from dogs in pictures help you assess whether the orthogonality thesis is true or relevant?
Does inventing Boltzmann machines help you decide whether it is plausible that an AGI built in 2042 will exhibit mesa-optimization? Probably at least a little, but it's not clear how far this goes.