Let’s conduct a survey on the quality of MIRI’s implementation

Many people here, myself included, are very concerned about the risks from rapidly improving artificial general intelligence (AGI). A significant fraction of people in that camp give to the Machine Intelligence Research Institute, or recommend others do so.

Unfortunately, for those who lack the necessary technical expertise, this is partly an act of faith. I am in some position to evaluate the arguments about whether safe AGI is an important cause. I’m also in some position to evaluate the general competence and trustworthiness of the people working at MIRI. On those counts I am satisfied, though I know not everyone is.

However, I am in a poor position to evaluate:

  • The quality of MIRI’s past research output.

  • Whether their priorities are sensible or clearly dominated by alternatives.

I could probably make some progress if I tried, but in any case I don’t have the time to focus on this one question.
To get around this, I have asked a few people who have more technical expertise or inside knowledge for their opinions. But I wish I had access to something more systematic and reliable.
This is not a unique situation: science funding boards are often in a poor position to judge the things they are funding, and so have to rely on carefully chosen experts to vet them.
I suggest we conduct a survey of people who are in an unusually good position to know whether MIRI a) is a good investment of skills and money, and b) should change its approach in order to do better.
The ideal person to oversee such a survey would:
  1. Have an existing reputation for trustworthiness and confidentiality.

  2. Think that AI risk is an important cause, but have no particular convictions about the best approach or organisation for dealing with it. They shouldn’t have worked for MIRI in the past, but will presumably have some association with the general rationality or AI community.

I suggest the survey have the following traits:
  1. Involve 10-20 people, including a sample of present and past MIRI staff, people at organisations working on related problems (CFAR, FHI, FLI, AI Impacts, CSER, OpenPhil, etc.), and largely unconnected math/AI/CS researchers.

  2. Results should be compiled by two or three people—ideally with different perspectives—who will summarise the results in such a way that nothing in the final report could identify what any individual wrote (unless they are happy to be named). Their goal should be purely to represent the findings faithfully, given the constraints of brevity and confidentiality.

  3. The survey should ask about:

    1. Quality of past output.

    2. Suitability of staff for their roles.

    3. Quality of current strategy/priorities.

    4. Quality of operations and other non-research aspects of implementation.

    5. How useful more funding/staff would be.

    6. Comparison with the value of work done by other related organisations.

    7. Suggestions for how the work or strategy could be improved.

  4. Obviously, participants should comment only on what they know about. The survey should link to MIRI’s strategy and recent publications.

  5. MIRI should be able to suggest people to be contacted, but so should the general public through an announcement. MIRI should also have a chance to comment on the survey itself before it goes out. Ideally the survey would be checked by someone who understands good survey design, as subtle aspects of wording can be important.

  6. It should be impressed on participants how valuable open and thoughtful answers are for maximising the chances of solving the problem of AI risk in the long run.

If conducted to a high standard, I would find this survey convincing in either direction.
MIRI/FHI’s survey of expected timelines for the development of artificial intelligence has been a similarly valuable resource for discussing the issue with non-experts over the last few years.
This approach could be applied to other organisations as well. However, I feel it is most pressing for MIRI because i) it is so hard for someone like me to know what to say about the questions above, and ii) they want more money than they currently receive, so the evidence is decision-relevant.
I don’t expect that this project would be prohibitively costly relative to its value. Ideally, it would take only 100-300 hours in total, including time spent filling out the survey. MIRI currently spends around $2 million a year (including some highly skilled labour that is probably underpriced), so the opportunity cost would represent under 1% of their annual budget.
If anyone would like to volunteer, please do so here. I would be happy to advise, and also to try to find funders if a small grant would be helpful.
Thanks to Ozy for more or less suggesting the above and prompting me to write this.