[...] We describe something like the alignment problem and ask, “Do you think this is an important problem? Is it a hard problem? Is it a valuable problem to work on at the moment?” And I think, for all of those answers, the distribution shifted toward it being important and valuable and hard[...] They had five options for how important it is, say. For importance, the share of people choosing the top category, that it was the most important, went from 5% to 20%. For the value of working on it today, the top category went from 8% to 27%. And for how hard it is, the share saying it is much harder than other things went from 9% to 26%.
That’s really great news! Hopefully it’s not all talk and we get more mainstream ML research on safety over time.
Minor correction: “much more valuable” (to work on today relative to other problems in the field) went from 1% to 8%. Katja’s numbers in the penultimate quoted sentence seem to come from combining the responses “more valuable” and “much more valuable,” a change from 9% to 27%.
Thanks for the corrections!
Can you tell me exactly which numbers I should change and where?
could be changed to either [...] or something like [...] depending on whether you want to preserve Katja’s words or (almost) preserve her numbers.
Agreed!
As Zach pointed out below, there might be some mistakes left in the precise numbers; for any quantitative analysis I would suggest reading AI Impacts’ write-up: https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/
AI Impacts also published our 2022 survey’s data!