I don’t know the answer, though my initial guess would have been that (within the x-risk ecosystem) “Unusually ‘optimistic’ people being for some reason unusually likely to have given public, quantitative estimates before” is a large factor. I talked about this here. I’d guess the cause is some combination of:
There just aren’t many people giving public quantitative estimates, so noise can dominate.
Noise can also be magnified by social precedent; e.g., if the first person to give a public estimate happened to be an optimist by pure coincidence, that on its own might encourage other optimists to speak up more and pessimists less, which could then cascade.
For a variety of dangerous and novel things, if you say ‘this risk is low-probability, but still high enough to warrant concern’, you’re likelier to sound like a sober, skeptical scientist, while if you say ‘this risk is high-probability’, you’re likelier to sound like a doomsday-prophet crackpot. I think this is an important part of the social forces that caused many scientists and institutions to understate the risk of COVID in Jan/Feb 2020.
Causing an AI panic could have a lot of bad effects, such as (paradoxically) encouraging racing, or (less paradoxically) inspiring poorly-thought-out regulatory interventions. So there’s more reason to keep quiet if your estimates are likelier to panic others. (Again, this may have COVID parallels: I think people were super worried about causing panics at the outset of the pandemic, though I think this made a lot less sense in the case of COVID.)
This bullet point would also skew the survey optimistic, unless people give a lot of weight to ‘it’s much less of a big deal for me to give my pessimistic view here, since there will be a lot of other estimates in the mix’.
Alternatively, maybe pessimists mostly aren’t worried about starting a panic, but are worried about other people accusing them of starting a panic, so they’re more inclined to share their views when they can be anonymous?
Intellectuals in the world at large tend to assume a “default” view along the lines of ‘the status quo continues; things are pretty safe and stable; to the extent things aren’t safe or stable, it’s because of widely known risks with lots of precedent’. If you have a view that’s further from the default, you might be more reluctant to assert that view in public, because you expect more people to disagree, ask for elaborations and justifications, etc. Even if you’re happy to have others criticize and challenge your view, you might not want to put in the extra effort of responding to such criticisms or preemptively elaborating on your reasoning.
For various reasons, optimism about AI seems to correlate with optimism about public AI discourse. E.g., some people are optimists about AI outcomes in part because they think the world is more competent/coordinated/efficient/etc. overall, which could also lead them to expect fewer downsides and more upside from public discourse.
Of course, this is all looking at only one of several possible explanations for ‘the survey results here look more pessimistic than past public predictions by the x-risk community’. I focus on these to explain one of the reasons I expected to see an effect like this. (The bigger reason is just ‘I talked to people at various orgs over the years and kept getting this impression’.)