Thank you for this feedback—these are good points! Glad you liked the article.
The way I approached collecting personal judgement-based predictions was roughly as follows:

1. I came up with an initial list of people who are well known in this space.
2. I did some digging on each person on that list to see if they had made a prediction in the last few years about the timeline to TAI or similar (some had, but many hadn’t).
3. I reviewed the resulting list for any obvious gaps (in terms of either demographics or leanings on the issue) and then iterated on this process.
It was in step 3 that I ended up seeking out Robin Hanson’s views. Basically, from my initial list, I ended up with a sample that seemed to be leaning pretty heavily in one direction. I suspected that my process had skewed me towards people with shorter timelines: as someone who is very new to the AI safety community, I have most quickly become aware of those who are especially worried about x-risks from AI emerging in the near future.
I wanted to consciously counterbalance that by deliberately seeking out a few predictions from people who are known to be sceptical about shorter timelines. Robin Hanson may not be as renowned as some of the other researchers included, but his arguments did receive some attention in the literature and seemed worth noting. I thought his perspective ought to be reflected, to provide an example of the Other Position. And as you point out, many sceptics aren’t in the business of providing numerical predictions; the fact that Hanson had put some rough numbers to things made his prediction especially useful for loose comparison purposes.
I agree with what you say about personal predictions needing to be taken with a grain of salt, and about the direction they might skew things in. Something I should perhaps have made clearer in this article: I don’t view each source mentioned here as a piece of evidence with equal weight. The choice to include personal views, including Robin Hanson’s, was not to say that we should weight such predictions in our analysis in the same way as e.g. the expert surveys or quantitative models. I just wanted to give a sense of the range of predictions that are out there, pointing to the fact that views like Hanson’s do exist. Based on your comment, I might add some more commentary on how ‘not all evidence is equal’ when I go on to formalise my findings in future work; I think it’s worth making this point clearer.
Thanks again!
Hi Jack, thanks for your comment! I think you’ve raised some really interesting points here.
I agree that it would be valuable to consider the effect of social and political feedback loops on timelines. This isn’t something I have spent much time thinking about yet; indeed, when discussing forecast models within this article, I focused far more on E1 than on E2. But I think that (a) some closer examination of E2 and (b) exploration of how social/political factors affect AI scenarios and their underlying strategic parameters (including their timelines!) are both within the scope of what Convergence’s scenario planning work hopes to eventually cover. I’d like to think more about it!
If you have any specific suggestions about how we could approach these issues and explore these dynamics, I’d be really keen to hear them.