Do your opinion updates extend from individual forecasts to aggregated ones?
I think the best individual forecasters are, on average, better than the aggregate Metaculus forecasts at the moment they make the prediction, especially if they spent a while on it. I'm less sure once you account for prediction lag (the Metaculus and community predictions are usually better at incorporating new information), and my assessment there would depend on a bunch of details.
In particular how reliable do you think is the Metaculus median AGI timeline?
As noted by matthew.vandermerwe, the Metaculus question operationalization of “AGI” is very different from the one our community typically uses. I don’t have a strong opinion on whether a random AI safety person would do better on that operationalization.
For something closer to what EAs care about, I’m pretty suspicious of the current forecasts given for existential risk/GCR estimates (for example in the Ragnarok series), and I generally do not think existential risk researchers should strongly defer to them (though I suspect the forecasts and comments are good enough that it’s generally worth most x-risk researchers who study the relevant questions reading them).