Well, you might disagree, but you’d have to consider yourself likely to be a better predictor than most AI experts.
The lack of consensus doesn’t really change the point, because we are looking at a probability distribution either way (see the sketch below).
Booms and winters are well known among researchers; they are aware of how those cycles affect the field, so I don’t think it’s so easy to figure out whether or not they’re being biased.
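To make the “probability distribution either way” point concrete, here is a minimal sketch in Python. All the numbers are invented for illustration (they are not survey data), and modeling each expert’s forecast as a lognormal is just an assumption: the idea is only that pooling divergent expert estimates still yields a distribution you can query, consensus or not.

```python
import numpy as np

# Hypothetical illustration: pool divergent expert AGI-timeline estimates
# into a single distribution. The medians below are made up, not survey data.
rng = np.random.default_rng(0)

# Each expert's forecast is modeled (as an assumption) as a lognormal over
# "years until AGI", with very different medians to reflect the lack of consensus.
expert_medians = [15, 30, 60, 120, 200]   # years (invented for illustration)
expert_sigma = 0.5                        # each expert's own uncertainty

samples = np.concatenate([
    rng.lognormal(mean=np.log(m), sigma=expert_sigma, size=10_000)
    for m in expert_medians
])

# Even without consensus, the pooled samples define a probability
# distribution we can query for decision-relevant quantiles.
for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(samples, q):.0f} years")
```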
I think it’s important to hold “AI development research” and “AI timeline prediction-making” as two separate skillsets. Expertise in one doesn’t necessarily imply expertise in the other (though there’s probably some overlap).
Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.
> I think it’s important to hold “AI development research” and “AI timeline prediction-making” as two separate skillsets. Expertise in one doesn’t necessarily imply expertise in the other (though there’s probably some overlap).
OK, that’s true. The problem is, it’s hard to tell if you are better at predicting timelines.
> Any good model of the quality of AI dev researcher timeline opinions needs to be able to explain why AI safety was considered a joke by the field for years, and only started to be taken seriously by (some) AI dev researchers after committed advocacy from outsiders.
I think that’s a third issue, not a matter of timeline opinions either.
> I think that’s a third issue, not a matter of timeline opinions either.
Seems relevant in that if you surveyed timeline opinions of AI dev researchers 20 years ago, you’d probably get responses ranging from “200 years out” to “AGI? That’s apocalyptic hogwash. Now, if you’ll excuse me...”
I don’t know which premise here is more at odds with the real beliefs of AI researchers: that they didn’t worry about AI safety because they didn’t think AGI would be built, or that there has ever been a time when they thought it would take >200 years to do it.