There’s a giant financial and status incentive for AI safety workers to inflate the dangers. It’s also more likely that someone becomes an AI safety expert if they overestimate the risk.
This study wasn’t recruiting AI safety workers? Rather, it had AI domain experts, many of whom appeared to have thought about AI x-risk not much more than I’d expect of the median AI researcher. [EDIT 2023/07/16: I’m less sure that this is true]
There was a follow-up study with both superforecasters and people who have thought about or worked in AI safety (or adjacent fields). I was involved as a participant. That study had somewhat more (though arguably still limited) engagement between the two camps, and I think there was more constructive dialogue and more useful updating in comparison.
John—yes, it is plausible that there could be selection effects, such that only people with a P(doom) over 1% even bother becoming AI safety researchers.
But this cuts both ways: any ‘forecasting experts’ who think the P(doom) is over 1% might have already become AI safety researchers, rather than remaining general forecasting experts.
Also, I’m a bit baffled by this narrative that there are ‘giant financial and status incentives’ for AI safety researchers to inflate the dangers.
If somebody wanted to become rich and famous, becoming an AI safety researcher wouldn’t even make the Top 1000 list of good career strategies.
The entirety of their job security and status in society depends on the risk being high. You don’t view that as a strong incentive to create the impression that the risk is high?
To explain my disagree-vote: this kind of explanation isn’t a good one in isolation.
I could also say it benefits AI developers to downplay[1] the risk, as that means their profits and status will be high, and society will have a more positive view of them as people who are developing fantastic technologies rather than raising existential risks.
And what makes this a bad explanation is that it is so easy to vary. As above, you can flip the sign. I can also easily swap out the area for any other existential risk (e.g. nuclear war or climate change), and the argument could run exactly the same.
Of course, motivated reasoning exists and may play a role in explaining the gap between superforecasters and experts in this survey. But on the whole, I don’t find this explanation convincing without further evidence.
[1] Consciously or not.
I wouldn’t expect a lot of scarcity mindset, because there’s a lot of generically in-demand talent and experience among AI x-risk orgs. Status may be a more reasonable question, but job security doesn’t really make sense as an incentive here.