I like that you admit that your examples are cherry-picked. But I’m actually curious what a non-cherry-picked track record would show. Can people point to Yudkowsky’s successes? What did he predict better than other people? What project did MIRI generate that either solved clearly interesting technical problems or got significant publicity in academic/AI circles outside of rationalism/EA? Maybe instead of a comment here this should be a short-form question on the forum.
While he’s not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, and the issue has now attracted the interest of top academics. This isn’t a complete track record, but it’s still a very important data point. It’s a bit like if he had been the first person to say that we should take nuclear war seriously, and then, five years later, people started building nuclear bombs and academics realized that nuclear war was very plausible.
I definitely do agree with that!
It’s possible I should have emphasized the significance of it more in the post, rather than moving on after just a quick mention at the top.
If it’s of interest: I say a little more about how I think about this in response to Gwern’s comment below. (To avoid duplicating the thread, people might want to respond there rather than here if they have follow-on thoughts on this point.) My further comment is:
This is certainly a positive aspect of his track record: many people have now moved closer to his views. (It also suggests that his writing was, in expectation, a major positive contribution to the project of existential risk reduction, insofar as it has helped move people up and we assume this was the right direction to move.) But it doesn’t imply that we should give him many more “Bayes points” than we give to the people who moved.
Suppose, for example, that someone said in 2020 that there was a 50% chance of full-scale nuclear war in the next five years. Then, due to Russia’s invasion of Ukraine, most people moved their credences upward (although they still remained closer to 0% than to 50%). Does that imply the person giving the early warning was better calibrated than the people who moved their estimates up? I don’t think so. And I think, in this nuclear case, some analysis can be used to justify the view that the person giving the early warning was probably overconfident: they probably didn’t have enough evidence or good enough arguments to actually justify a 50% credence.
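To make the “Bayes points” intuition a bit more concrete, here is a minimal sketch in Python of how a proper scoring rule would treat the two forecasters in this hypothetical. The specific credences (0.50 for the early warner, 0.15 for someone who updated after the invasion) and the `log_score` helper are illustrative assumptions of mine, not figures from the comment above; the only point is that whether the early warner earns more points depends on the outcome, not on the fact that others later moved toward their number.

```python
import math

def log_score(p_event: float, event_occurred: bool) -> float:
    """Logarithmic score: log of the probability assigned to what actually
    happened. Higher (closer to 0) is better."""
    return math.log(p_event if event_occurred else 1.0 - p_event)

# Hypothetical credences in "full-scale nuclear war within five years":
early_warner = 0.50   # the early-warning forecaster in the example
updater      = 0.15   # someone who moved up from ~5% after the invasion

for occurred in (False, True):
    outcome = "war occurs" if occurred else "no war"
    print(f"{outcome:>10}: early warner {log_score(early_warner, occurred):+.2f}, "
          f"updater {log_score(updater, occurred):+.2f}")
```

Under the no-war outcome the more conservative updater scores better; only if war actually occurred would the 50% forecast come out ahead. So the mere fact that people moved toward the higher estimate doesn’t, by itself, settle who was better calibrated.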
It may still be the case that the person giving the early warning (in the hypothetical nuclear case) had some valuable and neglected insights, missed by others, that are well worth paying attention to and seriously reflecting on; but that’s a different matter from believing they were overall well-calibrated or should be deferred to much more than the people who moved.
[[EDIT: Something else it might be worth emphasizing here is that I’m not arguing for the view “ignore Eliezer.” It’s closer to “don’t give Eliezer’s views outsized weight, compared to (e.g.) the views of the next dozen people you might be inclined to defer to, and factor in evidence that his risk estimates might have a significant upward bias.”]]