John—yes, it is plausible that there could be selection effects, such that only people with a P(doom) over 1% even bother becoming AI safety researchers.
But this cuts both ways: any ‘forecasting experts’ who think the P(doom) is over 1% might have already become AI safety researchers, rather than remaining general forecasting experts.
Also, I’m a bit baffled by this narrative that there are ‘giant financial and status incentives’ for AI safety researchers to inflate the dangers.
If somebody wanted to become rich and famous, becoming an AI safety researcher wouldn’t even make the Top 1000 list of good career strategies.
The entirety of their job security and status in society depends on the risk being high. You don’t view that as a strong incentive to create the impression that the risk is high?
To explain my disagree-vote: this kind of explanation isn’t a good one in isolation.
I could also say it benefits AI developers to downplay[1] risk, since that keeps their profits and status high, and society will view them more positively, as people developing fantastic technologies rather than raising existential risks.
And what makes this a bad explanation is that it is so easy to vary. Like above, you can flip the sign. I can also easily swap out the area for any other existential risk (e.g. Nuclear War or Climate Change), and the argument could run exactly the same.
Of course, I think motivated reasoning is something that exists and may play a role in explaining the gap between superforecasters and experts in this survey. But on the whole I don’t find it convincing without further evidence.
I wouldn’t expect much of a scarcity mindset, because there’s a lot of generically in-demand talent and experience among AI x-risk orgs. Status may be a more reasonable question, but job security doesn’t really make sense.
[1] consciously or not