I feel like the main reasons you shouldn’t trust forecasts from subject matter experts are something like:
external validity: do experts in ML have good forecasts that outperform a reasonable baseline?
AFAIK this is an open question, probably not enough forecasts have resolved yet?
internal validity: do experts in ML have internally consistent predictions? Do they give similar answers at slightly different times when the evidence that has changed is minimal? Do they give similar answers when not subject to framing effects?
AFAIK they’ve failed miserably
base rates: what’s the general reference class we expect to draw from?
I’m not aware of any situation where subject-matter experts who aren’t incentivized to make good forecasts do noticeably better than trained amateurs with prior forecasting track records.
So, like you and steve2152, I’m at least somewhat skeptical of putting too much faith in expert forecasts.
However, I don’t think a lack of theoretical understanding of current ML can be strong evidence against trusting experts here, for a simple reason: by conservation of expected evidence, that would imply we ought to trust forecasts from experts who do have a theoretical understanding of their models more. And that seems wrong, because (among other reasons) it would’ve been wrong 50 years ago to trust GOFAI experts on their AI timelines!