Survey of the 2018 EA Survey

A few months ago, before the results of the 2018 EA Survey came out, I asked whether people would be interested in making predictions about the answers, and 35 people completed a survey of the survey. I think the majority of people answering were from the group organisers group on Facebook, so they probably have a better idea than most of what the EA community looks like in general, and hopefully by testing our predictions we can improve our calibration.

For each question, respondents were asked to give a lower and upper bound such that they thought there was an 80% chance the true answer fell between those bounds. All but the first question had answers that were percentages between 0 and 100.
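To make the scoring concrete, here is a minimal sketch (in Python, using made-up numbers rather than the actual survey data) of how a single response can be marked correct: the true survey result simply has to fall inside the stated 80% interval, and a respondent's hit rate is the fraction of questions for which that happens.

```python
# Hypothetical example: one respondent's 80% intervals and the true survey results.
# The question names and values here are illustrative, not the actual data.
intervals = {
    "identifies_as_male": (55, 75),            # (lower %, upper %)
    "global_poverty_top_priority": (30, 50),
}
true_results = {
    "identifies_as_male": 67,
    "global_poverty_top_priority": 66,
}

def hit_rate(intervals, true_results):
    """Fraction of questions where the true value lies inside the stated interval."""
    hits = sum(
        lower <= true_results[q] <= upper
        for q, (lower, upper) in intervals.items()
    )
    return hits / len(intervals)

print(hit_rate(intervals, true_results))  # a well-calibrated respondent should be near 0.80
```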

Here is the program, run on GuidedTrack. If you want to see the raw results data for the survey of the survey, you can email me at david@ealondon.com.

A few things that might be interesting:

  • Answers were generally overconfident: respondents got an average of 60% of answers within their intervals whilst aiming for 80%, and only 2 out of 35 people were underconfident.

  • The question with the most correct responses was the proportion of respondents identifying as male; 91% got this one right.

  • Most respondents predicted there would be many more people saying they were politically centrist, right, or far right than the results suggested.

  • It looks like most people thought the individual cause priorities would be less popular, or that there was more exclusivity between the choices, rather than people choosing multiple causes as a top or near-top priority. For example, the median prediction for global poverty was 45%, whereas in the 2018 EA Survey 66% said it was a top or near-top priority; the other selected causes also had much lower predictions than the actual results.

  • Unsurprisingly, people with wider intervals generally got more answers correct. I haven’t made a score combining these two, but one would probably give a better idea of how well individuals are calibrated (a possible scoring sketch follows this list).
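One standard option for such a combined score (my suggestion, not something computed in the original analysis) is the interval score, sometimes called the Winkler score: it adds the interval width to a penalty for misses, so narrower intervals are rewarded only when they still contain the true value. A rough sketch:

```python
def interval_score(lower, upper, true_value, alpha=0.2):
    """Interval (Winkler) score for a central (1 - alpha) interval; lower is better.

    Width is always penalised; missing the true value adds a penalty scaled
    by 2/alpha, so an 80% interval (alpha = 0.2) pays 10x the miss distance.
    """
    score = upper - lower
    if true_value < lower:
        score += (2 / alpha) * (lower - true_value)
    elif true_value > upper:
        score += (2 / alpha) * (true_value - upper)
    return score

# Hypothetical comparison for a true value of 66%:
print(interval_score(40, 50, 66))  # narrow interval that misses: 10 + 10 * 16 = 170
print(interval_score(30, 80, 66))  # wide interval that contains the value: 50
```

Averaging this score across a respondent's questions would reward people who were both accurate and appropriately precise, rather than treating wide and narrow intervals as equally good whenever they happen to contain the answer.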

The chart below shows the percentage of correct answers and the average interval range for each of the 35 participants.

Here is a table showing, for each question, what percentage of the 35 respondents gave a correct answer.

Here is a table comparing the median midpoint answer for each question with the correct answer; a positive difference suggests people (on average) thought the answer would be higher than it was, and a negative difference suggests they thought it would be lower.