People working specifically on AGI (e.g., people at OpenAI, DeepMind) seem especially bullish about transformative AI, even relative to experts not working on AGI. Note that this is not uncontroversial; see, e.g., criticisms from Jessica Taylor, among others. Note also that there's a strong selection effect whereby the people who are most bullish on AGI choose to work on it.
I have several uncertainties about what you meant by this:
Do you include in “People working specifically on AGI” people working on AI safety, or just capabilities?
“bullish” in the sense of “thinking transformative AI (TAI) is coming soon”, or in the sense of “thinking TAI will be great”, or in the sense of “thinking TAI will happen in some discontinuous way”, or something else?
What do you mean by “experts not working on AGI”?
Why say “even”? Wouldn’t the selection effect you mention mean we’d expect experts not working on AGI to be less bullish? (Though other factors could push in the opposite direction, such as people who are working on a problem realising just how massive and complicated it is.)
Do you include in “People working specifically on AGI” people working on AI safety, or just capabilities?
Just capabilities (in other words, people working to create AGI), although I think the safety/capabilities distinction is less clear-cut outside of a few dedicated safety orgs like MIRI.
“bullish” in the sense of “thinking transformative AI (TAI) is coming soon”
Yes.
What do you mean by “experts not working on AGI”?
AI people who aren’t explicitly thinking of AGI when they do their research (I think this correctly describes well over 90% of ML researchers at Google Brain, for example).
Why say “even”
Because it might be surprising (to people asking or reading this question who are imagining long timelines) to see timelines as short as the ones AI experts report. The second point is qualifying that AGI experts believe timelines are even shorter still.
In general it looks like my language choice was more ambiguous than desirable, so I’ll edit my answer to be clearer!
Ah, ok. The edits clear everything up for me, except whether the “even” is meant to highlight that this is even shorter than the timelines given in the paragraph above. (Not sure that matters much, though.)
Good answer.
I edited that section; let me know if there are remaining points of confusion!