While he’s not single-handedly responsible, he led the movement to take AI risk seriously at a time when approximately no one was talking about it, a movement which has now attracted the interest of top academics. This isn’t a complete track record, but it’s still a very important data point.
I definitely do agree with that!
It’s possible I should have emphasized the significance of it more in the post, rather than moving on after just a quick mention at the top.
If it’s of interest: I say a little more about how I think about this, in response to Gwern’s comment below. (To avoid thread-duplicating, people might want to respond there rather than here if they have follow-on thoughts on this point.) My further comment is:
This is certainly a positive aspect of his track record—that many people have now moved closer to his views. (It also suggests that his writing was, in expectation, a major positive contribution to the project of existential risk reduction—insofar as this writing has helped move people up and we assume this was the right direction to move.) But it doesn’t imply that we should give him many more “Bayes points” than we give to the people who moved.
Suppose, for example, that someone said in 2020 that there was a 50% chance of full-scale nuclear war in the next five years. Then—due to Russia’s invasion of Ukraine—most people moved their credences upward (although they still remained closer to 0% than 50%). Does that imply the person giving the early warning was better calibrated than the people who moved their estimates up? I don’t think so. And I think—in this nuclear case—further analysis would support the view that the person giving the early warning was probably overconfident; they probably didn’t have enough evidence or good enough arguments to actually justify a 50% credence.
It may still be the case that the person giving the early warning (in the hypothetical nuclear case) had some valuable and neglected insights, missed by others, that are well worth paying attention to and seriously reflecting on; but that’s a different matter from believing they were overall well-calibrated or should be deferred to much more than the people who moved.
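To make the scoring intuition concrete, here’s a minimal sketch, using made-up numbers and a simple log scoring rule (not anything from the actual forecasts discussed), of how each hypothetical forecaster would be scored under either resolution:

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Natural-log score for a binary forecast: values closer to 0 are better."""
    return math.log(p if outcome else 1.0 - p)

# Illustrative, made-up credences for the hypothetical nuclear case
early_warner = 0.50  # forecast in 2020: full-scale nuclear war within five years
updater = 0.10       # observer who moved up from a low credence after the invasion

for outcome, label in ((False, "no war by 2025"), (True, "war by 2025")):
    print(f"{label}: early warner {log_score(early_warner, outcome):+.2f}, "
          f"updater {log_score(updater, outcome):+.2f}")
```

Under this rule, points are only awarded when the question resolves: if no war occurs, the 10% forecaster scores better (−0.11 vs. −0.69); the mere fact that other people moved their credences upward in 2022 earns the early warner nothing by itself.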
[[EDIT: Something else it might be worth emphasizing, here, is that I’m not arguing for the view “ignore Eliezer.” It’s closer to “don’t give Eliezer’s views outsized weight, compared to (e.g.) the views of the next dozen people you might be inclined to defer to, and factor in evidence that his risk estimates might have a significant upward bias to them.”]]