Do you plan on doing any research into the cruxes of disagreement with ML researchers?
I realise that there is some information on this within the qualitative data you collected (which I will admit to not having read all 60 pages of), but it surprises me that this isn't more of a focus. From my incredibly quick scan of the qualitative data (so apologies for any inaccurate conclusions), it seems like many of the ML researchers were familiar with the basic thinking about safety but didn't seem to buy it, for reasons that didn't look fully drawn out.
It seems to me that there is a risky presupposition that the arguments made in the papers you used are correct, and that what matters now is framing. To me, given the proportion of resources EA stakes on AI safety, it would be worth trying to understand why people (particularly knowledgeable ML researchers) have a different set of priorities to many in EA. It seems suspicious how little intellectual credit is given to ML/AI people who aren't EAs.
I am curious to hear your thoughts. I really appreciate the research done here and am very much in favour of more rigorous community/field building being done as you have here.
I'm not going to comment too much here, but if you haven't seen my talk ("Researcher Perceptions of Current and Future AI" (first 48m; skip the Q&A) (Transcript)), I'd recommend it! Specifically, you want the 23m-48m segment of that talk, where I talk about the results of interviewing ~100 researchers about AI safety arguments. We're going to publish much more on this interview data within the next month or so, but the major results, which describe some of the AI researchers' cruxes, are there.
To me, given the proportion of resources EA stakes on AI safety, it would be worth trying to understand why people (particularly knowledgeable ML researchers) have a different set of priorities to many in EA. It seems suspicious how little intellectual credit is given to ML/AI people who aren't EAs.
I don't see this as suspicious, because I suspect EAs and AI researchers are driven by different goals. I'm not surprised that they disagree, since even if AI risk is high, it's probably still rational to work on AI research if you have a selfish worldview.