None of the people you named had an ML background. Adam and I have CS backgrounds (before we joined CHAI, I was a PhD student in programming languages, while Adam worked in distributed systems, iirc). Ben is in international relations. If you were counting Paul, he did a CS theory PhD. I suspect all of us chose the “ML track” because we disagreed with MIRI’s approach and thought the “ML track” would be more impactful.
Thanks, this seems like an important point, and I’ll edit my comment accordingly. I think I had been aware of at least Paul’s and your backgrounds, but made a mistake by not thinking of this and not distinguishing between your prior backgrounds and what you’re doing now.
(Nitpick: While Ben is doing an international relations PhD now, I think his undergraduate degree was in physics and philosophy.)
I still have the impression there is a larger influx of people with ML backgrounds, but my above comment overstates that effect, and in particular it seems clearly false to suggest that Adam / Paul / you preferring ML-based approaches has a primarily sociological explanation (which my comment at least implicitly does).
(Ironically, I have long been skeptical of the value of MIRI’s agent foundations research, and more optimistic about the value of ML-based approaches to AI safety, and Paul’s IDA agenda in particular, though I’m not particularly qualified to make such assessments (certainly less so than e.g. Adam and you), and my background is in pure maths rather than ML. That maybe could have tipped me off …)