Re “Oxford EAs”—Toby Ord is presumably a paradigm of that. In the Great AI Timelines Scare of 2017, I spent some time looking into timelines. His median at the time was 15 years, which has held up pretty well. (And his x-risk probability from AI, as stated in The Precipice, was 10%.)
I think I was wrong in my views on timelines then. But people shouldn’t assume I’m a stand-in for the views of “Oxford EAs”.
I ran a timelines exercise in 2017 with many well-known FHI staff (though not including Nick), where the point was to elicit each person's then-current beliefs about AGI by plotting CDFs. Looking at them now, I can tell you our median dates were: 2024, 2032, 2034, 2034, 2034, 2035, 2054, and 2079. So the median of our medians was (robustly) 2034 (i.e. 17 more years' time). I was one of the people who had that date, though people didn't see each other's CDFs during the exercise.
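Taking the eight reported dates at face value, here is a minimal sketch (in Python; not part of the original exercise) of that median-of-medians figure and of why it is robust:

```python
# A rough check of the "median of medians" arithmetic, using the eight
# median dates reported above (not the original exercise data or code).
from statistics import median

median_dates = [2024, 2032, 2034, 2034, 2034, 2035, 2054, 2079]

print(median(median_dates))         # 2034.0 -> median of the medians
print(median(median_dates) - 2017)  # 17.0   -> "17 more years" from 2017

# "Robustly" 2034: dropping any single participant leaves the median at 2034.
assert all(median(median_dates[:i] + median_dates[i + 1:]) == 2034
           for i in range(len(median_dates)))
```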
I think these have held up well.
So I don’t think Eliezer’s “Oxford EAs” point is correct.
What’s the Great AI Timelines Scare of 2017?
As I remember it, the main impetus was that a couple of leading AI safety ML researchers started making the case for 5-year timelines. They were broadly qualitatively correct and remarkably insightful (promoting the scaling-first worldview), but obviously quantitatively too aggressive. AlphaGo and AlphaZero had freaked people out, too.
A lot of other people at the time (including close advisers to OP folks) had 10-20yr timelines. My subjective impression was that people in the OP orbit generally had more aggressive timelines than Ajeya’s report did.
Wow - @Toby_Ord, then why did you give such a high existential risk estimate for climate? Did you assign substantial probability to AGI taking 100 or 200 years, despite a median date of 2032?
Toby Ord had an x-risk probability of 10% from AI and about 7% from other causes back then, for a total of about 1/6.
Reading this, I at first thought Toby Ord had a total all-cause x-risk probability of 10% back then, so I checked it. I thought this might be helpful to note, since Eliezer specifically mentioned <10% x-risk from AI as very unreasonable.
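Spelling out the 1/6 figure using only the numbers above (a rough decomposition, not a quotation from The Precipice): a total of about 1/6 ≈ 16.7%, minus the 1/10 = 10% attributed to AI, leaves roughly 6.7%, i.e. the “about 7%” from all other causes combined.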