To explain my disagree-vote: this kind of explanation isn’t a good one in isolation.

I could equally say it benefits AI developers to downplay[1] risk, since that keeps their profits and status high and leads society to view them as people developing fantastic technologies rather than raising existential risks.

What makes this a bad explanation is that it is so easy to vary. As above, you can flip the sign. I can also easily swap out the area for any other existential risk (e.g. nuclear war or climate change), and the argument runs exactly the same.

Of course, motivated reasoning exists and may play a role in explaining the gap between superforecasters and experts in this survey. But on the whole I don’t find this explanation convincing without further evidence.
[1] Consciously or not.