Yeah, I wasn’t saying that you were making a claim about Eliezer; I just wanted to highlight that he’s possibly making a stronger claim even than the one you’re warning against when you say “one should generally distrust one’s ability to ‘beat elite common sense’ even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance”.
If the claim is that we shouldn’t give much weight to the views of individuals and institutions that we shouldn’t expect to be closely aligned with the truth, this is something that hardly anyone would dispute.
I think the main two factual disagreements here might be “how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?” and “for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities’) competency, epistemic rationality, meta-rationality, etc.?” I don’t know whether you in particular would disagree with Eliezer on those claims, though it sounds like you may.
Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, AI timelines, philosophical zombies, population ethics, etc.—unless it is conjoined with an argument for thinking that his skepticism extends to the relevant community of experts in each of those fields.
Yeah, agreed. The “adequacy” level of those fields, and the base adequacy level of civilization as a whole, is one of the most important questions here.
Could you say more about what you have in mind by “confident pronouncements [about] AI timelines”? I usually think of Eliezer as very non-confident about timelines.
I think the main two factual disagreements here might be “how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?” and “for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities’) competency, epistemic rationality, meta-rationality, etc.?”
Thank you, this is extremely clear, and captures the essence of much of what’s going on between Eliezer and his critics in this area.
Could you say more about what you have in mind by “confident pronouncements [about] AI timelines”? I usually think of Eliezer as very non-confident about timelines.
I had in mind forecasts Eliezer made many years ago that didn’t come to pass, as well as his most recent bet with Bryan Caplan. But it’s a stretch to call these ‘confident pronouncements’, so I’ve edited my post and removed ‘AI timelines’ from the list of examples.
Going back to your list:

nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics

I haven’t looked much at the nutrition or population ethics discussions, though I understand Eliezer mistakenly endorsed Gary Taubes’ theories in the past. If anyone has links, I’d be interested to read more.
AFAIK Eliezer hasn’t published why he holds his views about animal consciousness, and I don’t know what he’s thinking there. I don’t have a strong view on whether he’s right (or whether he’s overconfident).
Concerning zombies: I think Eliezer is correct that the zombie argument can’t provide any evidence for the claim that we instantiate mental properties that don’t logically supervene on the physical world. Updating on factual evidence is a special case of a causal relationship, and if instantiating some property P is causally impacting our physical brain states and behaviors, then P supervenes on the physical.
I’m happy to talk more about this, and I think questions like this are really relevant to evaluating the track record of anti-modesty positions, so this seems like as good a place as any for discussion. I’m also happy to talk more about meta questions related to this issue, like, “If the argument above is correct, why hasn’t it convinced all philosophers of mind?” I don’t have super confident views on that question, but there are various obvious possibilities that come to mind.
Concerning QM: I think Eliezer’s correct that Copenhagen-associated views like “objective collapse” and “quantum non-realism” are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham’s razor. I’m happy to talk more about this too; I think the object-level discussions are important here.
A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I’m here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn’t itself take a position on the issue by relying primarily on the inside view.
Let’s consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.
To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he can persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists change their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?
Update (2017-10-28): I just realized that the kind of challenge I’m raising here has been carried out, in the form of a “natural experiment”, for Eliezer’s views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer’s TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).
Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):
I admit I don’t know as much about economics as some of you, but I am working off of a poll of the country’s best economists who came down pretty heavily on the side of this not significantly increasing growth. If you want to tell me that it would, your job isn’t to explain Economics 101 theories to me even louder, it’s to explain how the country’s best economists are getting it wrong.
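The inferential logic behind the “small random sample” suggestion above can be made concrete. Here’s a minimal sketch (the numbers are hypothetical, and the uniform prior is my own simplifying assumption, not anything claimed in the thread):

```python
from math import comb

def beta_cdf(x, a, b):
    # Regularized incomplete beta function I_x(a, b) for integer a, b,
    # computed via the binomial-sum identity.
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

# Hypothetical outcome: 8 of 10 randomly sampled physicists update
# toward the view after reading the sequence.
k, n = 8, 10

# With a uniform prior over the population proportion p of physicists
# who would update, the posterior is Beta(k + 1, n - k + 1).
p_majority = 1 - beta_cdf(0.5, k + 1, n - k + 1)

# Posterior probability that a majority of the full population would update.
print(round(p_majority, 3))  # ≈ 0.967
```

The point is just that a genuinely random sample, even a small one, licenses a fairly confident inference about the larger expert population, which is why the proposed experiment would be informative.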
A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I’m here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn’t itself take a position on the issue by relying primarily on the inside view.
This seems correct. I just noticed you could phrase this the other way—why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth? I agree that this disagreement will come down to building detailed models of incentives in human organisations more than building inside views of each field (which is why I didn’t find Greg’s post particularly persuasive—this isn’t a matter of discussing rational Bayesian agents, but of discussing the empirical incentive landscape we are in).
why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?
Maybe because these people have been surprisingly accurate? In addition, it’s not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Yeah, that’s the sort of discussion that seems to me most relevant.
Concerning QM: I think Eliezer’s correct that Copenhagen-associated views like “objective collapse” and “quantum non-realism” are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham’s razor. I’m happy to talk more about this too; I think the object-level discussions are important here.
I don’t think the modest view (at least as presented by Gregory) would believe in any of the particular interpretations, as there is still significant debate.

The informed modest person would go, “You have object-level reasons to dislike these interpretations. Other people have object-level reasons to dislike your interpretations. Call me when you have hashed it out or done an experiment to pick a side.” They would go on and do QM without worrying too much about what it all means.
Yeah, I’m not making claims about what modest positions think about this issue. I’m also not endorsing a particular solution to the question of where the Born rule comes from (and Eliezer hasn’t endorsed any solution either, to my knowledge). I’m making two claims:
QM non-realism and objective collapse aren’t true.
As a performative corollary, arguments about QM non-realism and objective collapse are tractable, even for non-specialists; it’s possible for non-specialists to reach fairly confident conclusions about those particular propositions.
I don’t think either of those claims should be immediately obvious to non-specialists who completely reject “try to ignore object-level arguments”-style modesty, but who haven’t looked much into this question. Non-modest people should initially assign at least moderate probability to both 1 and 2 being false, though I’m claiming it doesn’t take an inordinate amount of investigation or background knowledge to determine that they’re true.
(Edit re Will’s question below: In the QM sequence, what Eliezer means by “many worlds” is only that the wave-function formalism corresponds to something real in the external world, and that this wave function evolves over time to yield many different macroscopic states like our “classical” world. I’ve heard this family of views called “(QM) multiverse” views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.)
Huh, he seemed fairly confident about endorsing MWI in his sequence here.

He endorses “many worlds” in the sense that he thinks the wave-function formalism corresponds to something real and mind-independent, and that this wave function evolves over time to yield many different macroscopic states like our “classical” world. I’ve heard this family of views called “(QM) multiverse” views to distinguish this weak claim from the much stronger claim that, e.g., decoherence on its own resolves the whole question of where the Born rule comes from.

From a 2008 post in the MWI sequence:
One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of.
[… W]hat does the integral over squared moduli have to do with anything? On a straight reading of the data, you would always find yourself in both blobs, every time. How can you find yourself in one blob with greater probability? What are the Born probabilities, probabilities of? Here’s the map—where’s the territory?
I don’t know. It’s an open problem. [...]
This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics. Everything else—everything else—obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.
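The linearity point in the quoted passage is easy to check numerically. A minimal sketch (my own illustration with a toy two-state system, not from the original thread):

```python
import numpy as np

# Any unitary matrix is a valid (discrete-step) quantum evolution;
# a rotation times a global phase will do.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) * np.exp(0.3j)

# Two amplitude distributions over the same two basis states.
a = np.array([0.6 + 0.0j, 0.8j])
b = np.array([0.8 + 0.0j, 0.6 + 0.0j])

# Linearity: the evolution of A plus the evolution of B
# equals the evolution of A + B.
assert np.allclose(U @ (a + b), U @ a + U @ b)

# The Born rule (squared modulus) is NOT linear in the amplitudes.
born = lambda psi: np.abs(psi) ** 2
assert not np.allclose(born(a + b), born(a) + born(b))
```

This is exactly the asymmetry the quote is pointing at: the dynamics treat amplitude distributions linearly, while the squared-modulus rule that connects amplitudes to observed frequencies does not.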
Ah, it has been a while since I engaged with this stuff. That makes sense. I think we are talking past each other a bit, though. I’ve adopted a moderately modest approach to QM, since I’ve not touched it in a while and I expect the debate has moved on.

We started from a criticism of a particular position (the Copenhagen interpretation), which I think is a fair thing to do for both the modest and the immodest. The modest person might misunderstand a position and be able to update themselves better if they criticize it and get a better explanation.
The question is what happens when you criticize it and don’t get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?
I’m curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).
I don’t think we should describe all instances of deference to any authority, all uses of the outside view, etc. as “modesty”. (I don’t know whether you’re doing that here; I just want to be clear that this at least isn’t what the “modesty” debate has traditionally been about.)
The question is what happens when you criticize it and don’t get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?
I don’t think there’s any general answer to this. The right answer depends on the strength of the object-level arguments; on how much reason you have to think you’ve understood and gleaned the right take-aways from those arguments; on your model of the physics community and other relevant communities; on the expected information value of looking into the issue more; on how costly it is to seek different kinds of further evidence; etc.
I’m curious what you think about partial non-reality of wavefunctions (as described by the AncientGeek here, and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).
In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we’d naively think it is, etc.), then I’d be interested to hear more details. If the idea is that there are ways to talk about the experimental data without committing ourselves to a claim about why the Born rule holds, then I agree with that, though it obviously doesn’t answer the question of why the Born rule holds. If the idea is that there are no facts of the matter outside of observers’ data, then I feel comfortable dismissing that view even if a non-negligible number of physicists turn out to endorse it.
I also feel comfortable having lower probability in the existence of God than the average physicist does; and “physicists are the wrong kind of authority to defer to about God” isn’t the reasoning I go through to reach that conclusion.
I also feel comfortable having lower probability in the existence of God than the average physicist does; and “physicists are the wrong kind of authority to defer to about God” isn’t the reasoning I go through to reach that conclusion.
Out of curiosity, what is the reasoning you would go through to reach that conclusion?
In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we’d naively think it is, etc.), then I’d be interested to hear more details.
Heh, I’m in danger of getting nerd-sniped into physics land, which would be a multiyear journey. I found myself trying to figure out whether the stories in this paper count as real macroscopic worlds or not (or hidden variables). And then I tried to figure out whether it matters or not.
I’m going to bow out here. I mainly wanted to point out that there are more possibilities than just believe in Copenhagen and believe in Everett.
Cool. Note the bet with Bryan Caplan was partly tongue-in-cheek, though it’s true Eliezer is currently relatively pessimistic about humanity’s chances.
From Eliezer on Facebook:
Key backstory: I made two major bets in 2016 and lost both of them, one bet against AlphaGo beating Lee Se-dol, and another bet against Trump winning the presidency. In both cases I was betting with the GJP superforecasters, but lost anyway.
Meanwhile Bryan won every one of his bets, again, including his bet that “Donald Trump will not concede the election by Saturday”.
So, to take advantage of Bryan’s amazing bet-winning capability and my amazing bet-losing capability, I asked Bryan if I could bet him that the world would be destroyed by 2030.
The generator of this bet wasn’t a strong epistemic stance, which seems important to emphasize because of the usual expectations involving public bets. BUT you may be licensed to draw conclusions from the fact that, when I was humorously imagining what I could get from exploiting this phenomenon, my System 1 thought that having the world not be destroyed before 2030 was the most it could reasonably ask.