My impression is that there hasn’t been so much a shift in views within individual people as an influx of a younger generation who tend to have ML backgrounds and, roughly speaking, tend to agree more with Paul Christiano than with MIRI. Some of them are now somewhat prominent themselves (e.g. Rohin Shah, Adam Gleave, you), and so the distribution of views among the set of perceived “AI risk thought leaders” has changed.
None of the people you named had an ML background. Adam and I have CS backgrounds (before we joined CHAI, I was a PhD student in programming languages, while Adam worked in distributed systems iirc). Ben is in international relations. If you were counting Paul, he did a CS theory PhD. I suspect all of us chose the “ML track” because we disagreed with MIRI’s approach and thought that the “ML track” would be more impactful.
(I make a point out of this because I sometimes hear “well if you started out liking math then you join MIRI and if you started out liking ML you join CHAI / OpenAI / DeepMind and that explains the disagreement” and I think that’s not true.)
I don’t recall anyone seriously suggesting there might not be enough time to finish a PhD before AGI appears.
I’ve heard this (might be a Bay Area vs. Europe thing).
Thanks, this seems like an important point, and I’ll edit my comment accordingly. I think I had been aware of at least Paul’s and your backgrounds, but made the mistake of not thinking of this and of not distinguishing between your prior backgrounds and what you’re doing now.
(Nitpick: While Ben is doing an international relations PhD now, I think his undergraduate degree was in physics and philosophy.)
I still have the impression that there is a larger influx of people with ML backgrounds, but my above comment overstates that effect, and in particular it seems clearly false to suggest that Adam’s / Paul’s / your preference for ML-based approaches has a primarily sociological explanation (which my comment at least implicitly did).
(Ironically, I have long been skeptical of the value of MIRI’s agent foundations research, and more optimistic about the value of ML-based approaches to AI safety, and Paul’s IDA agenda in particular (though I’m not particularly qualified to make such assessments, certainly less so than e.g. Adam and you), and my background is in pure maths rather than ML. That maybe could have tipped me off …)
My experience matches Ben’s more than yours.