I didn’t say it was easy! I just said that rational analysis, rational gathering of evidence, etc. can pay dividends.
And indeed, if you go back 5 years and look at what people were saying at the time, some people did do way better than most at predicting what happened.* I happen to remember being at dinner parties in the Bay in late 2018 and early 2019 where LWers were discussing the question “If, as now seems quite plausible, predicting text is the key to general intelligence & will scale to AGI, what implications does that have?” This may even have been before GPT-2 was public; I don’t remember. Probably it was shortly after.
That’s on hard mode though. To prove my point, all I have to do is point out that most of the world has been surprised by the general pace of progress in AI, and in particular by progress towards AGI, over the last 5 years. It wasn’t even on the radar for most people. But for some people, not only was it on the radar, it was basically what they expected. (MIRI’s timelines haven’t changed much in the last 5 years, I hear, because things have more or less proceeded about as quickly as they thought: different in the details, of course, but not generally slower or faster.)
*And I don’t think they just got lucky. They were well-connected, followed the field closely, took the forecasting job unusually seriously, and were unusually rational as people.