However, would it have been helpful to know that 85% of researchers recommend research agenda Y or person X?
TL;DR: No. (I know this is an annoying, unintuitive answer.)
I wouldn’t be surprised if 85% of researchers think it would be a good idea to advance capabilities (or to do some research that directly advances capabilities and doesn’t have a “full” safety theory of change), and they’ll give you reasons that sound very wrong to me. I’m assuming you’d interview anyone who sees themselves as working on “AI Safety”.
[I don’t actually know if this statistic would hold, but it’s the kind of example of how your survey suggestion might go wrong, imo.]
Thanks, that’s helpful to know. It’s a surprise to me though! You’re the first person I have discussed this with who didn’t think it would be useful to know which research agendas were more widely supported.
Just to check, would your intuition change if the people being surveyed were only people who had worked at AI organisations, or if you could filter to only see the aggregate ratings from the people you thought were most credible (e.g., these 10 researchers)?
As an aside, I’ll also mention that I think it would be a very helpful and interesting finding if 85% of researchers thought it would be a good idea to advance capabilities (or to do some research that directly advances capabilities and doesn’t have a “full” safety theory of change). That would make me change my mind on a lot of things and would probably spark a lot of important debate that wouldn’t otherwise have happened.