This was a really good read, in addition to being super well-timed!
I don’t think there’s a disagreement here about ideal in-principle reasoning. I’m guessing that the disagreement is about several different points:
In reality, how generally difficult is it to spot important institutions and authorities failing in large ways? Here we might ask subquestions for particular kinds of groups; e.g., maybe you and the anti-modest will turn out to agree about how dysfunctional US national politics is on average, while disagreeing about how dysfunctional academia is on average in the US.
In reality, how generally difficult is it to evaluate your own level of object-level accuracy in some domain, the strength of object-level considerations in that domain, your general competence or rationality or meta-rationality, etc.? To what extent should we update strongly on various kinds of data about our reasoning ability, vs. distrusting the data source and penalizing the evidence? (Or looking for ways to not have to gather or analyze data like that at all, e.g., prioritizing finding epistemic norms or policies that work relatively OK without such data.)
How strong are various biases, either in general or in our environs? It sounds like you think that arrogance, overconfidence, and excess reliance on inside-view arguments are much bigger problems for core EAs than underconfidence or neglect of inside-view arguments, while Eliezer thinks the opposite.
What are the most important and useful debiasing interventions? It sounds like you think these mostly look like attempts to reduce overconfidence in inside views, self-aggrandizing biases, and the like, while Eliezer thinks that it’s too easy to overcorrect if you organize your epistemology around that goal. I think the anti-modesty view here is that we should mostly address those biases (and other biases) through more local interventions that are sensitive to the individual’s state and situation, rather than through rules akin to “be less confident” or “be more confident”.
What’s the track record for more modesty-like views versus less modesty-like views overall?
What’s the track record for critics of modesty in particular? I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea. So I assume it would help to discuss the object-level disagreements underlying these diverging generalizations.
Does that match your sense of the disagreement?
Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:
I think the Eliezer-style ‘immodest’ view comprises two key claims:
1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, the groups we might take to be expert classes are in fact sufficiently poorly optimised for the truth that a reasonable rationalist (or similar) could be expected to do better.
2) We can reliably identify these cases.
If both are true, we can license ourselves to ‘pick fights’ where we make confident bets against expert consensus (or the lack thereof) in the knowledge that we are more likely than not to be right. If not, then modesty seems the better approach: it might be worth acting ‘as if’ our contra-expert impression is right and doing further work (because we might discover something important), but we should nonetheless defer to the expert consensus.
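To make the interaction between the two claims explicit, here is a minimal Bayesian sketch of when contrarian bets pay off. The function name and all the numbers are purely illustrative assumptions of mine, not estimates anyone in this thread has endorsed:

```python
# Toy model: how often are we actually right when we decide to bet against the experts?
# All numbers below are illustrative assumptions, not anyone's real estimates.

def p_right_given_flagged(base_rate, sensitivity, false_positive_rate):
    """P(experts really are beatable | we judge them beatable), by Bayes' rule."""
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flag

# Claim 1: some non-trivial fraction of expert classes are poorly optimised for truth.
base_rate = 0.1
# Claim 2: we can reliably identify those cases.
sensitivity = 0.8           # P(we flag a field as beatable | it really is)
false_positive_rate = 0.05  # P(we flag a field as beatable | it isn't)

print(p_right_given_flagged(base_rate, sensitivity, false_positive_rate))  # ~0.64
# With a weaker claim 2 (false_positive_rate = 0.3), the same base rate gives ~0.23:
print(p_right_given_flagged(base_rate, sensitivity, 0.3))
# i.e. confident contrarian bets only look good when *both* claims hold.
```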
It seems the best vindication of the immodesty view as Eliezer defends it would be a track record of such cases, on his part or that of the wider rationalist community. You correctly anticipate that I regard the track record here as highly adverse, for two reasons:
First, when domain experts look at the ‘answer according to the rationalist community re. X’, they’re usually very unimpressed, even if they’re sympathetic to the view themselves. I’m pretty atheist, but I find the ‘answer’ to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on likewise deeply unimpressed with the sophistication of the arguments offered.
Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely we will discover a convincing proof of physicalism any time soon). So what we observe is compatible both with ‘the rationalist community is right and this field is diseased (and so gets it wrong)’ and with ‘the rationalist community is greatly overconfident and the field is on the right track’. That said, I take the sheer number of fields which the rationalist community regards as sufficiently diseased that it takes itself to do better to be implausible on priors.
Second, the best thing would be a clear track record to judge by; single cases, either way, don’t give much to go on, as neither the modest nor the immodest would claim they should expect to win every single time. I see the rationalist community as having one big win (re. AI), yet little else. That Eliezer’s book offers two pretty weak examples (the BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (a big bet on Taubes), doesn’t lead me to upgrade my pretty autumnal view of the track record.
when domain experts look at the ‘answer according to the rationalist community re. X’, they’re usually very unimpressed, even if they’re sympathetic to the view themselves. I’m pretty atheist, but I find the ‘answer’ to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on likewise deeply unimpressed with the sophistication of the arguments offered.
I would love to see better evidence about this. E.g., it doesn’t match my experience of talking to physicists.
I’m pretty atheist, but I find the ‘answer’ to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field.
This would be a pertinent critique if the aim of LessWrong were to be a skeptics’ forum, created to produce the most canonical debunkings (serving a societal purpose akin to Snopes). It seems much less relevant if you are trying to understand the world, unless perhaps you have a very strong intuition or evidence that sophistication is highly correlated with truth.
Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely we will discover a convincing proof of physicalism any time soon).
I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I’d be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.
If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts’ views and toolkits in general can be slow to change, particularly in areas like philosophy.
I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes’ accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won’t instantly snap into a new state that’s maximally consistent with all of the world’s newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.
This is why Eliezer doesn’t claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive at largely the same conclusions, largely independently, as far as I know.
I’d consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)
You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.
I agree such an object-level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can’t provide this. On many of these topics (QM, p-zombies) I don’t pretend any great knowledge; for others (e.g. theism), I can’t exactly find the ‘rationalist case for atheism’ crisply presented.
I am naturally hesitant to infer, from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time, that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on one of them on the strength of your personal object-level assent looks unlikely to go well.
I consider the decision theory work a case in point. The view that FDT/UDT/TDT is a great advance on the decision-theoretic state of the art is very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI’s material on arXiv and similar. Yet the prevailing view among these seems to be at best lukewarm, and at worst scathing.
This seems challenging to reconcile with a model of rationalists just getting to the great insights early, before everyone else catches up. It could be that the decision theory community is so diseased that it cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find the alternative hypothesis somewhat more compelling: that it is the rationalist community which is diseased, diving down a decision-theory dead end without the benefit of much interaction with decision theory experts to correct it.
To be clear, I’m not saying that the story I told above (“here are some cool ideas that I claim haven’t sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed”) should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I’m just gesturing at one reason why I think it’s possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.
I’m also implicitly making a claim about there being similarities between many of the domains you’re pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that’s unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn’t) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren’t widely known among philosophers who endorse naturalism) to new domains.
I’m curious what criticisms you’ve heard of MIRI’s work on decision theory. Is there anything relevant you can link to?
I don’t think the account of the relative novelty of the ‘LW approach’ to philosophy is a good fit for the available facts; “relatively” new is, I suggest, a pretty relative term.
You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. The logical positivists enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; I suggest Quine is a sufficiently ‘big name’ in philosophy that his approach was at least widely appreciated by the relevant academic communities.
This is challenging to reconcile with an account on which “Rationality’s philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent from domain experts is best explained by their not being aware of it”. Closely analogous approaches were tried a very long time ago, and weren’t found extraordinarily persuasive (even if we subset to naturalists). It doesn’t help that when the ‘LW answer’ is expounded (e.g. in the Sequences) the argument offered isn’t particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.
I suggest a better fit for this data is that the rationality approach looks particularly persuasive to people without subject-matter expertise.
Re. decision theory: beyond the general social-epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), the remarks I’ve heard often target ‘technical quality’. Chalmers noted in a past AMA his disappointment that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist’s view is that the work isn’t rigorous and is a bit sloppy (on Carl’s advice, I’m trying to contact more). Not being a decision theorist myself, I haven’t delved into the object-level considerations.

The “Cheating Death in Damascus” and “Functional Decision Theory” papers came out in March and October, so I recommend sharing those, possibly along with the “Decisions Are For Making Bad Outcomes Inconsistent” conversation notes. I think these are much better introductions than e.g. Eliezer’s old “Timeless Decision Theory” paper.
Quineans and logical positivists have some vague attitudes in common with people like Drescher, but the analogy seems loose to me. If you want to ask why other philosophers didn’t grab all the low-hanging fruit in areas like decision theory or persuade all their peers in areas like philosophy of mind (which is an interesting set of questions from where I’m standing, and one I’d like to see examined more too), I think a more relevant group to look at will be technically minded philosophers who think in terms of Bayesian epistemology (and information-theoretic models of evidence, etc.) and software analogies. In particular, analogies that are more detailed than just “the mind is like software”, though computationalism is an important start. A more specific question might be: “Why didn’t E.T. Jaynes’ work sweep the philosophical community?”
I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.
Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an “amazing bet-losing capability” and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won all his bets). Carl Shulman notes that Eliezer’s approach “has lead [him] astray repeatedly, but I haven’t seen as many successes.”
and Carl Shulman notes that his approach “has lead [him] astray repeatedly, but I haven’t seen as many successes.”
That quote may not convey my view, so I’ll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn’t mean one can’t look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.
If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don’t understand.
In the case of MWI, the physicists say they don’t have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc), ones that add new physics to pick out part of the wavefunction to be our world, and ones like shut-up-and-calculate that amount to ‘don’t talk about whether parts of the wave function we don’t see are real.’
Physicists working on quantum foundations are mostly mutually aware of one another’s arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or just refusing to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.
In terms of modesty, I would say that generally ‘trying to answer the question about external reality’ is a good epistemic marker for questions about external reality, as is Copernicanism: not giving humans a special place in physics, and not drastically penalizing theories on which the world is big or human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.
In contrast, I hadn’t seen anything like such due diligence regarding nutrition, or precedent in common law.
Regarding the OP thesis, you could summarize my stance as that assigning ‘epistemic peer’ or ‘epistemic superior/inferior’ status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending into the object-level: e.g. if the class of ‘scientific realist arguments’ has a good track record, then you will need to learn enough about a given question and the debate on it to know if that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.
In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn’t mean one can’t look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.
I offered that quote to cast doubt on Rob’s assertion that Eliezer has “a really strong epistemic track record, and that this is good evidence that modesty is a bad idea.” I didn’t mean to deny that Eliezer had some successes, or that one shouldn’t “look closely and form much better estimates of the likelihood of good invisible reasons” or that “the base rate of dysfunction is anywhere near zero”, and I didn’t offer the quote to dispute those claims.
Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.
Please take my comment as explaining my own views, lest they be misunderstood, not as condemning your citation of me.
Okay, thank you for the clarification.
[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]