This study wasn’t recruiting AI safety workers; rather, it had AI domain experts, many of whom appeared to have thought about AI x-risk not much more than I’d expect the median AI researcher to have. [EDIT 2023/07/16: I’m less sure that this is true]
There was a follow-up study with both superforecasters and people who have thought about or worked in AI safety (or adjacent fields); I was involved as a participant. That study had more (though arguably still limited) engagement between the two camps, and I think there was more constructive dialogue and more useful updating in comparison.