This isn’t much independent evidence, I think: it seems unlikely that you could become director of MIRI unless you agreed. (I know that there’s a lot of internal disagreement at other levels.)
My point has little to do with him being the director of MIRI per se.
I suppose I could be wrong about this, but my impression is that Nate Soares is among the top 10 most talented/insightful people with an elaborate inside view and years of research experience in AI alignment. He also seems to agree with Yudkowsky on a whole lot of issues and predicts about the same p(doom) for about the same reasons. And I feel that many people don’t give enough thought to the fact that while e.g. Paul Christiano has interacted a lot with Yudkowsky and disagreed with him on many key issues (while agreeing on many others), there’s also Nate Soares, who broadly agrees with Yudkowsky’s models that predict very high p(doom).
Another, more minor point: if someone is bringing up Yudkowsky’s track record in the context of his extreme views on AI risk, it seems helpful to talk about Soares’ track record as well.
I think this maybe argues against a point not made in the OP. Garfinkel isn’t saying “disregard Yudkowsky’s views”—rather he’s saying “don’t give them extra weight just because Yudkowsky’s the one saying them”.
For example, from his reply to Richard Ngo:
I think it’s really important to separate out the question “Is Yudkowsky an unusually innovative thinker?” and the question “Is Yudkowsky someone whose credences you should give an unusual amount of weight to?”
I read your comment as arguing for the former, which I don’t disagree with. But that doesn’t mean that people should currently weigh his risk estimates more highly than they weigh the estimates of other researchers in the space.
So at least from Garfinkel’s perspective, Yudkowsky and Soares do count as data points; they’re just equal in weight to other relevant data points.
(I’m not expressing any of my own, mostly unformed, views here)
Ben has said this about Eliezer, but not about Nate, AFAIK.