Several thoughts:
I'm not sure I can argue for this, but it feels weird and off-putting to me that all this energy is being spent discussing how good a track record one guy has, especially one guy with a very charismatic and assertive writing style and a history of attempting to provide very general guidance for how to think across all topics (though I guess any philosophical theory of rationality does that last thing). It just feels like a bad sign to me, though that could be for dubious social reasons.
The question of how much to defer to E.Y. isn't answered just by things like "he has possibly the best track record in the world on this issue." If he's out of step with other experts, and by a long way, we need a reason to think he outperforms the aggregate of experts before we weight him more heavily than the aggregate, and it's entirely normal, I'd have thought, for the aggregate to significantly outperform the single best individual. (I'm not making as strong a claim as that the best individual outperforming the aggregate is super-unusual and unlikely.) Of course, if you think he's nearly as good as the aggregate, then you should still move a decent amount in his direction. But even that is quite a strong claim, one that goes beyond him being in the handful of individuals with the best track record.
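To make that last point concrete with a toy linear opinion pool (hypothetical numbers, purely for illustration, not anyone's actual estimates): if your credence is \(p = w\,p_{\mathrm{EY}} + (1-w)\,p_{\mathrm{agg}}\) with an aggregate estimate \(p_{\mathrm{agg}} = 0.05\), an individual estimate \(p_{\mathrm{EY}} = 0.5\), and a fairly modest weight \(w = 0.2\) on the individual, you end up at \(p = 0.2 \times 0.5 + 0.8 \times 0.05 = 0.14\), which is already a substantial move in his direction even though the aggregate still dominates.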
It strikes me that some of the people criticizing this post, on the grounds that E.Y. actually has a great track record, keep citing "he was right that there is significant X-risk from A.I. when almost everyone else missed it." I'm wary of this for a couple of reasons.
Firstly, this isn't actually a prediction that has resolved as correct in any unambiguous way. Sure, a lot of very smart people in the EA community now agree. (And I agree the risk is worth assigning EA resources to, to be clear.) But in my view we should be wary of substituting the community's judgment that a prediction looks rational for a track record of predictions that have actually resolved successfully. (I think the latter is better evidence than the former in most cases.)
Secondly, I feel like E.Y. being right about the importance of A.I. risk is actually not very surprising, conditional on the key assumption about E.Y. that Ben is relying on when he tells people to be cautious about the probabilities and timelines E.Y. gives for A.I. doom; and even so, IF Ben's assumption is correct, it is still a good reason to doubt E.Y.'s p(doom). Suppose, as is being alleged here, someone has a general bias, for whatever reason, towards the view that doom from some technological source or other is likely and imminent. Does that make it especially surprising that that individual finds an important source of doom most people have missed? Not especially, as far as I can see: sure, they will perhaps be less rational on the topic, but a) a bias towards p(doom) being high doesn't necessarily imply being poor at ranking sources of doom-risk by relative importance, and b) there is probably a counter-effect where a bias towards doom makes you more likely to find underrated doom-risks, because you spend more time looking. Of course, finding a doom-risk larger than most others that approximately everyone had missed would still be a very impressive achievement. But the question Ben is addressing isn't "is E.Y. a smart person with insights about A.I. risk?" but rather "how much should we update on E.Y.'s views about p(near-term A.I. doom)?" Suppose a significant bias towards doom is genuinely evidenced by E.Y.'s earlier nanotech prediction (which, to be fair, is only one data point), and a good record at identifying neglected, important doom sources is only weak evidence that E.Y. lacks the bias. Then we'd be right to update only a little towards doom, even if E.Y.'s record on A.I. risk was impressive in some ways.
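To spell out the shape of that last step (my own framing and symbols, not anything from the post): let \(B\) be "E.Y. has a general bias towards imminent technological doom" and \(E\) be "E.Y. identified a major doom-risk that almost everyone else missed." By Bayes,
\[
\frac{P(B \mid E)}{P(\lnot B \mid E)} \;=\; \frac{P(E \mid B)}{P(E \mid \lnot B)} \cdot \frac{P(B)}{P(\lnot B)} .
\]
If \(P(E \mid B)\) is close to \(P(E \mid \lnot B)\), or even larger because a doom-focused person searches harder, then the likelihood ratio is near one and observing \(E\) barely reduces our credence in the bias; so the update towards his p(doom) should likewise be small.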
Some things that aren't said in this post or any comments in here yet:
The issue isn't at all about 15-20-year-old content; it's about very recent content and events (mostly publicly visible)
In addition to this recent, publicly visible content, there are several latent issues or effects that directly affect progress in the relevant cause area
To calibrate: this could be slowing things down by a factor of ten or more, in what is supposed to be the most important cause area in EA, whose effects are supposed to arrive very soon
Certain comments here do not contain all of the relevant content, because laying it out would risk damaging an entire cause area.
Certain commenters may feel personally restricted from doing so for a variety of complex reasons ("moral mazes"), and the content they are presenting is a "second-best" option
The above interacts poorly with the customs and practices around discourse and criticism
These, in totality, have become a sort of odious, out-of-place specter, invisible to people who spend a lot of time here
For all I know, you may be right or not (insofar as I follow what's being insinuated), but whilst I freely admit that I, like anyone who wants to work in EA, have self-interested incentives not to be too critical of Eliezer, there is no specific secret "latent issue" that I personally am aware of and am consciously avoiding talking about. Honest.
I am grateful for your considerate comment and your reply. I didn't believe or mean to suggest there was any dishonesty.
Maybe I should have added[1]:
"this is for onlookers"
"this is trying to rationalize/explain why this post exists, which has 234 karma and 156 votes, yet only talks about high school stuff."
I posted my comment because this situation is hurting onlookers and producing bycatch?
I donât really know what to do here (as a communications thing) and I have incentives not to be involved?
But this is sort of getting into the elliptical rhetoric and self-referential stuff that is sort of related to the problem in the first place.