Hey Larks. I just want to reiterate first that this was a Draft Amnesty Day draft, which is mostly why I didn’t go into the level of detail of concrete examples. I didn’t finish the draft because I was generally quite uncertain about the conclusions. Also, I don’t doubt Ajeya did a great job; I was really just musing about whether, ex ante, Ajeya should have been chosen to write the report, rather than whether, ex post, we’re happy she did. Finally, I have very little technical AI knowledge myself.
I’m unsure if such a critical draft was the best choice for an amnesty day draft in hindsight! Bear in mind I’m far from the best person to ask this question, but my gut feeling is that someone (or a group of people) with formal academic training in machine learning, computational neuroscience, etc. would have been a better fit. EA has money, so we could get the cream of the crop to do research if we really wanted to.
Maybe (these are taken from the Google Scholar page for AI...):
Geoffrey Hinton—the most citations under AI on Google Scholar of anyone. Expertise in ML, as well as cognitive science and computer science. Received the Turing Award. Maybe he’s too busy, but again, money talks.
Terrence Sejnowski—expertise in computational neuroscience and AI. Would this have been a good combo of expertise to do this research?
I think I’ll stop there because you probably get the picture of who I’m thinking about. The above might be terrible options for all I know, but my general point is that there are people who live and breathe AI/ML and who are renowned in their field. Should we have tried to make more use of them?
EDIT: It’s certainly possible I underestimated how interdisciplinary Ajeya’s research is (as per Neel Nanda’s comment), which I agree would reduce the usefulness of the AI experts.
According to Wikipedia, “Regarding existential risk from artificial intelligence, Hinton typically declines to make predictions more than five years into the future”. So it seems plausible that he is not really interested in AI timelines and forecasting. If this is the case, then I think having Ajeya Cotra write the report is preferable to Geoffrey Hinton.
As a more general point, it is not clear whether good experts in AI are good experts in AGI forecasting. AI differs here from climate change, in that climate science inherently deals much more with forecasting.
Does having good intuitions about which neural network architectures make it easier to tell traffic lights apart from dogs in pictures help you assess whether the orthogonality thesis is true or relevant?
Does inventing Boltzmann machines help you decide whether it is plausible that an AGI built in 2042 exhibits mesa-optimization? Probably at least a little, but it’s not clear how far this goes.