Yup, MIRI is in a fairly unique situation with respect to the inscrutability of its research to a large and interested technical audience.
The proposal makes sense to me, though I think that if you want people to trust a survey, you need to exclude the organisation that is the subject of the survey from any involvement in it, including suggesting survey recipients.
One challenge is that AI researchers are likely to already hold views about useful approaches to problems in AI, and to have thought at least a little about AI risk, so it may be hard to assess whether they come in with a particular conviction about how to deal with AI risk; you'd need some way of judging that bias.
The report should break out any differences between past/present employees and those suggested by MIRI on the one hand, and everyone else on the other. I think you need a mix of both insiders and outsiders to get an overall picture.