A survey like this is probably a good idea, although it might not give us any evidence that isn’t already publicly available. A non-AI risk expert already has quite a few indicators about MIRI’s quality:
It has gotten several dozen papers accepted to conferences.
Some of these papers have a decent number of citations; many have around five. (You can find citation counts on Google Scholar, but I don’t know a better way to get this information than manually searching for each paper and checking its citations.) Many of the citations are by other MIRI papers, and most are by people at or associated with MIRI/FHI/CSER, probably because these are the only groups doing serious work on AI risk.
MIRI regularly collaborates with other organizations or individuals working on AI risk, which suggests that these people value MIRI’s contributions.
Stuart Russell, one of the world’s leading AI researchers, sits on the advisory board of MIRI, and appears to have plans to collaborate with MIRI.
If we did a survey like this one, it would probably be largely redundant with the evidence we already have. The people surveyed would need to be AI risk researchers, which in practice means a small handful of people at MIRI, FHI, FLI, and similar organizations. Many of these people already collaborate with MIRI and cite MIRI papers. Still, we might learn something from hearing their explicit opinions about MIRI, although I don’t know what.