A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I’m here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn’t itself take a position on the issue by relying primarily on the inside view.
Let’s consider a concrete example. When challenged to justify his extremely high confidence in the many-worlds interpretation (MWI), despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI, even though that interpretation seemed correct to him, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament Eliezer was in originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.
To persuade me, Greg, and other skeptics, what Eliezer needs to do is persuade the physicists. Failing that, he could persuade a small random sample of members of this expert class: if, upon reading the relevant sequence, a representative group of quantum physicists changed their views significantly in Eliezer’s direction, that would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?
Update (2017-10-28): I just realized that the kind of challenge I’m raising here has already been carried out, in the form of a “natural experiment”, for Eliezer’s views on decision theory. Years ago, David Chalmers spontaneously sent copies of Eliezer’s TDT (timeless decision theory) paper to half a dozen leading decision theorists. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).
Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):
I admit I don’t know as much about economics as some of you, but I am working off of a poll of the country’s best economists who came down pretty heavily on the side of this not significantly increasing growth. If you want to tell me that it would, your job isn’t to explain Economics 101 theories to me even louder, it’s to explain how the country’s best economists are getting it wrong.
A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I’m here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn’t itself take a position on the issue by relying primarily on the inside view.
This seems correct. I just noticed you could phrase this the other way around: why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than to building inside views of each field (which is why I didn’t find Greg’s post particularly persuasive: this isn’t a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).
why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?
Maybe because these people have been surprisingly accurate? In addition, it’s not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.
Yeah, that’s the sort of discussion that seems to me most relevant.