Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016.
Heightened support for AI safety research among AI researchers themselves seems like a prerequisite for directing more resources to AI safety researchers. I’m encouraged that AI researchers are so much more favorable toward AI safety research now than in 2016: (a) it suggests AI safety research is more likely to be as important as the EA community claims it is, and (b) pressure from academia is necessary (perhaps not sufficient, but necessary) to increase public support for AI safety research.
TL;DR: if AI researchers believe AI safety research is important, then it probably is. Also, for AI safety research to be better supported by the public, it’s probably necessary for AI researchers to want it to have more support.
- Munn