People working specifically on AGI (e.g., people at OpenAI, DeepMind) seem especially bullish about transformative AI, even relative to experts not working on AGI. Note that this is not uncontroversial; see, e.g., criticisms from Jessica Taylor, among others. Note also that there's a strong selection effect for the people who are most bullish on AGI to work on it.
I have several uncertainties about what you meant by this:
Do you include in "People working specifically on AGI" people working on AI safety, or just capabilities?
"bullish" in the sense of "thinking transformative AI (TAI) is coming soon", or in the sense of "thinking TAI will be great", or in the sense of "thinking TAI will happen in some discontinuous way", or something else?
What do you mean by "experts not working on AGI"? Why say "even"? Wouldn't the selection effect you mention mean we'd expect experts not working on AGI to be less bullish? (Though other factors could push in the opposite direction, such as people who are working on a problem realising just how massive and complicated it is.)
Do you include in "People working specifically on AGI" people working on AI safety, or just capabilities?
Just capabilities (in other words, people working to create AGI), although I think the safety/capabilities distinction is less clear-cut outside of a few dedicated safety orgs like MIRI.
"bullish" in the sense of "thinking transformative AI (TAI) is coming soon"
Yes.
What do you mean by "experts not working on AGI"?
AI people who aren't explicitly thinking of AGI when they do their research (I think this correctly describes well over 90% of ML researchers at Google Brain, for example).
Why say "even"?
Because it might be surprising (to people asking or reading this question who are imagining long timelines) to see timelines as short as the ones AI experts believe, so the second point is qualifying that AGI experts believe they're even shorter.
In general it looks like my language choice was more ambiguous than desirable, so I'll edit my answer to be clearer!
Ah, ok. The edits clear everything up for me, except that it's still not obvious the "even" is meant to highlight that this is even shorter than the timelines given in the paragraph above. (Not sure that matters much, though.)
Good answer.
I edited that section, let me know if there are remaining points of confusion!