Good question. I think AI researchers' views inform, or can inform, my own. A few examples from the recent NLP Community Metasurvey; I'll quote bits from this summary.
Few scaling maximalists: 17% agreed that "Given resources (i.e., compute and data) that could come to exist this century, scaled-up implementations of established existing techniques will be sufficient to practically solve any important real-world problem or application in NLP."
This was surprising and updated me somewhat against shorter timelines (and higher risk), as, for example, it clashes with the "+12 OOMs Enough" premise of Kokotajlo's argument for short timelines (see also this and his review of the Carlsmith report).
NLP is on a path to AGI: 58% agreed that "Understanding the potential development of artificial general intelligence (AGI) and the benefits/risks associated with it should be a significant priority for NLP researchers."
Related: 57% agreed that "Recent developments in large-scale ML modeling (such as in language modeling and reinforcement learning) are significant steps toward the development of AGI."
If these numbers were significantly lower or higher, it would also probably update my views.
AGI could be catastrophic: 36% agreed that "It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war."
46% of women and 53% of underrepresented-minority (URM) respondents agreed.
The comments suggested that respondents interpreted this in a pretty wide range of ways, including things like out-of-distribution (OOD) robustness failures leading to weapons launches.
This number is puzzling and hard to interpret, but it seems broadly consistent with AI Impacts' What do ML researchers think about AI in 2022?, where "48% of respondents gave at least 10% chance of an extremely bad outcome".
I don’t fully understand what this implies about the ML community’s views on AI alignment. But I can see myself updating positively if these concerns lead to more safety culture, alignment research, etc.