There are important reasons to think that the shift in the EA community’s timelines is within the measurement error of these surveys, which makes it less noteworthy.
(Say you put +/- 10 years and +/- 10% on all of these answers. There are plenty of reasons why you wouldn’t actually assess the uncertainty like this, e.g. probabilities can’t go below 0 or above 1, but it does help give a feel for the size of the error. You then get something like:
10%-30% chance of TAI by 2026-2046
40%-60% by 2050-2070
and 75%-95% by 2100
Many EA timelines, and many shifts in EA timelines, fall within those errors.)
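As a rough illustration of that arithmetic, here is a minimal Python sketch. The point estimates in it (roughly 20% by 2036, 50% by 2060, 85% by 2100) are just the midpoints implied by the ranges above, not figures taken directly from the surveys:

```python
# A minimal sketch of the error bars described above. The point estimates
# are the midpoints implied by the ranges in the text, not survey figures.
point_estimates = [
    (0.20, 2036),
    (0.50, 2060),
    (0.85, 2100),
]

PROB_MARGIN = 0.10  # +/- 10 percentage points
YEAR_MARGIN = 10    # +/- 10 years

for prob, year in point_estimates:
    # Clamp probabilities to [0, 1]; this is one reason the flat +/- treatment
    # is only a crude stand-in for a real uncertainty assessment.
    lo_p = max(0.0, prob - PROB_MARGIN)
    hi_p = min(1.0, prob + PROB_MARGIN)
    print(f"{lo_p:.0%}-{hi_p:.0%} chance of TAI by {year - YEAR_MARGIN}-{year + YEAR_MARGIN}")

# Prints: 10%-30% by 2026-2046, 40%-60% by 2050-2070, and 75%-95% by 2090-2110
# (which the text above rounds off to simply "by 2100").
```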
Reasons why these surveys have huge error
1. Low response rates
The response rates were really quite low.
2. Low response rates + selection biases + not knowing the direction of those biases
The surveys plausibly had a bunch of selection biases, pointing in various directions.
This means the responses won’t necessarily converge on the population means even with a decent sample size, so the surveys probably aren’t representative. But we’re much less certain about which direction they’re biased in.
Quoting me:
For example, you might think researchers who go to the top AI conferences are more likely to be optimistic about AI, because they have been selected to think that AI research is doing good. Alternatively, you might think that researchers who are already concerned about AI are more likely to respond to a survey asking about these concerns.
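To see why a bigger sample wouldn’t fix this, here is a toy simulation in which researchers with shorter timelines are somewhat more likely to respond. Every number in it is invented for illustration; the point is just that the gap between the survey mean and the population mean doesn’t shrink as more people are invited, and that if the response bias ran the other way the error would flip sign:

```python
import random

random.seed(0)

# Toy population of 100,000 researchers, each with a "years until HLMI" view.
# All numbers here are invented purely to illustrate selection bias.
population = [random.gauss(mu=60, sigma=25) for _ in range(100_000)]
true_mean = sum(population) / len(population)

def survey(population, n_invited):
    """Simulate a survey where short-timeline researchers are more likely
    to respond. Returns the mean of the responses."""
    invited = random.sample(population, n_invited)
    responses = []
    for years in invited:
        # Response probability falls as timelines get longer:
        # ~40% for very short timelines down to ~10% for very long ones.
        p_respond = max(0.10, 0.40 - 0.003 * years)
        if random.random() < p_respond:
            responses.append(years)
    return sum(responses) / len(responses)

print(f"true population mean: {true_mean:.1f} years")
for n_invited in (500, 5_000, 50_000):
    print(f"invited {n_invited:>6}: survey mean ~ {survey(population, n_invited):.1f} years")

# The gap between the survey mean and the true mean does not shrink as the
# sample grows; and if the response bias ran the other way, the error would
# flip sign, which is the "not knowing the direction" problem.
```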
3. Other problems, like inconsistent answers in the survey itself
AI Impacts wrote some interesting caveats here, including:
Asking people about specific jobs massively changes HLMI forecasts. When we asked some people when AI would be able to do several specific human occupations, and then all human occupations (presumably a subset of all tasks), they gave very much later timelines than when we just asked about HLMI straight out. For people asked to give probabilities for certain years, the difference was a factor of a thousand twenty years out! (10% vs. 0.01%) For people asked to give years for certain probabilities, the normal way of asking put 50% chance 40 years out, while the ‘occupations framing’ put it 90 years out. (These are all based on straightforward medians, not the complicated stuff in the paper.)
People consistently give later forecasts if you ask them for the probability in N years instead of the year that the probability is M. We saw this in the straightforward HLMI question, and most of the tasks and occupations, and also in most of these things when we tested them on mturk people earlier. For HLMI for instance, if you ask when there will be a 50% chance of HLMI you get a median answer of 40 years, yet if you ask what the probability of HLMI is in 40 years, you get a median answer of 30%.
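To get a feel for how large that last inconsistency is, here is a back-of-the-envelope sketch. It models a respondent’s forecast as a log-normal distribution over years until HLMI with an arbitrarily chosen spread; both the distributional form and the spread are assumptions made for illustration, not anything from the survey:

```python
from math import exp, log
from statistics import NormalDist

# How far apart are the two framings if a respondent's forecast were a single
# log-normal distribution over years-until-HLMI? SIGMA (the spread of
# log-years) is an arbitrary assumption used only for illustration.
std_normal = NormalDist()
SIGMA = 1.0

# Framing A ("year for a given probability"): median answer was a 50% chance
# at 40 years, i.e. an implied median of 40 years.
median_framing_a = 40.0

# Framing B ("probability for a given year"): median answer was 30% within
# 40 years. Under the log-normal assumption, back out the implied median:
#   P(T <= 40) = Phi((ln 40 - mu) / sigma) = 0.30
#   =>  mu = ln 40 - sigma * Phi^-1(0.30)
mu_b = log(40.0) - SIGMA * std_normal.inv_cdf(0.30)
median_framing_b = exp(mu_b)  # roughly 68 years

print(f"framing A implies a median of {median_framing_a:.0f} years")
print(f"framing B implies a median of about {median_framing_b:.0f} years (given the assumed spread)")
# A single coherent forecaster would give the same median either way; the
# ~28-year gap is the framing effect described in the quote above.
```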
The 80k podcast on the 2016 survey goes into this too.