I disagree with your analysis of “are we that ignorant?”.
For things like nuclear war or financial meltdown, we’ve got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I’m guessing it will take something like $1 billion in focused funding).
With AGI, ML researchers' forecasts can shift by 75 years depending on subtle changes in how the question is worded. That suggests unusual uncertainty.
We can see from Moore’s law and from ML progress that we’re on track for something at least as unusual as the industrial revolution.
The stock and bond markets do provide some evidence of predictability, but I’m unsure how good they are at evaluating events that happen much less than once per century.