Great article, thanks for writing this up!
I’m wondering how you went about sampling your list of personal judgement based predictions. Did you list all the predictions you could find, or did you vet them for notability? Like, while I am generally on Robin Hanson’s side here and think his arguments are worth hearing out, it feels weird to have him there alongside top-level AI researchers.
I guess I’m saying that the personal predictions should probably be taken with a grain of salt, as they are skewed towards what is available and known among an EA/Rationalist crowd. And also because said crowd is far more likely to make a numerical prediction, while skeptics are less in the habit of doing that.
Thank you for this feedback—these are good points! Glad you liked the article.
The way I approached collecting personal judgement based predictions was roughly as follows:
1. I came up with an initial list of people who are well known in this space
2. I did some digging on each person on that list to see if any of them had made a prediction in the last few years about the timeline to TAI or similar (some had, but many of them hadn’t)
3. I reviewed the list of results for any obvious gaps (in terms of either demographic or leanings on the issue) and then iterated on this process
It was in step 3 that I ended up seeking out Robin Hanson’s views. Basically, from my initial list, I ended up with a sample that seemed to be leaning pretty heavily in one direction. I suspected that my process had skewed me towards people with shorter timelines—as someone who is very new to the AI safety community, the people who I have become aware of most quickly have been those who are especially worried about x-risks from AI emerging in the near future.
I wanted to make up for that by deliberately seeking out a few predictions from people who are known to be sceptical about shorter timelines. Robin Hanson may not be as renowned as some of the other researchers included, but his arguments did receive some attention in the literature and seemed worth noting. I thought his perspective ought to be reflected, to provide an example of the other position. And as you point out, many sceptics aren’t in the business of providing numerical predictions. The fact that Hanson had put some rough numbers to things made his prediction especially useful for loose comparison purposes.
I agree with what you say about personal predictions needing to be taken with a grain of salt, and the direction they might skew things in, etc. Something I should have perhaps made clearer in this article: I don’t view each source mentioned here as a piece of evidence with equal weight. The choice to include personal views, including Robin Hanson’s, was not to say that we should weight such predictions in our analysis in the same way as e.g. the expert surveys or quantitative models. I just wanted to give a sense of the range of predictions that are out there, pointing to the fact that views like Hanson’s do exist. Based on your comment, I might add some more commentary on how ‘not all evidence is equal’ when I go on to formalise my findings in future work; I think it’s worth making this point clearer.
Thanks again!