I admire the motivation, but worry about selection effects.
I’d guess the median computer science professor hasn’t heard of MIRI’s work. Within the class of people who know about MIRI-esque issues, I’d guess knowledge of MIRI and enthusiasm about MIRI will be correlated: if you think FAI is akin to overpopulation on Mars, you probably won’t be paying close attention to the field. Thus those in a position to comment intelligently on MIRI’s work will be selected (in part) for being favourably disposed to the idea behind it.
That isn’t necessarily a showstopper, and it may be worth doing regardless. Perhaps several different attempts to gather relevant opinion (‘survey’ might be too strong a term) on the various points would be a good strategy. E.g.:
Similar to the FHI/MIRI timelines research, asking computer scientists about their perception of AI risk, the importance of alignment, etc. would yield helpful data.
Folks at MIRI and peer organisations could provide impressions of their organisational efficacy. This sort of ‘organisational peer review’ could be helpful for MIRI to improve. Reciprocal arrangements between groups within EA reviewing each other’s performance and suggesting improvements could be a valuable activity going forward.
For technical facility, one obvious port of call would be academics who remarked on the probabilistic set theory paper, as well as MIRI workshop participants (especially those who did not end up working at MIRI). As a general metric (given MIRI’s focus on research), a comparison of publications per dollar or per FTE research staff against other academic bodies would be interesting (see the rough sketch below). My hunch is this would be unflattering to MIRI (especially when narrowing down to more technical/math-heavy work), but naively looking at publication count may do MIRI a disservice, given it is working in weird and emerging branches of science.
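To make the publications-per-input comparison concrete, here is a minimal sketch of the normalisation I have in mind. All organisation names and figures below are hypothetical placeholders, not real data:

```python
# Rough sketch of a publications-per-input comparison.
# All names and figures are hypothetical placeholders, not real data.

orgs = {
    # name: (publications over the period, research FTEs, research budget in $)
    "Org A": (5, 6, 1_500_000),
    "Org B": (40, 20, 4_000_000),
}

for name, (pubs, ftes, budget) in orgs.items():
    print(f"{name}: {pubs / ftes:.2f} publications per FTE, "
          f"{pubs / budget * 1e6:.2f} publications per $1M")
```

Even this simple normalisation leaves open what counts as a ‘publication’ (peer-reviewed papers only? technical reports?), which is exactly where the disservice worry above bites.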
Another possibility, instead of surveying people who already know about MIRI (with the attendant selection worries), is to pay someone independent to get to know about them. I know GiveWell made a fairly adverse review of MIRI’s performance a few years ago; I’d be interested to hear what they think about them now. I’m unaware of ‘academic auditors’, but it might not be unduly costly to commission domain experts to have a look at the relevant issues. Someone sceptical of MIRI might suggest that this function is usually performed by academia at large, and that MIRI’s relatively weak connection to academia in these technical fields is a black mark against it (albeit one I know they are working to correct).