Thanks for your helpful reply. I think your bullet points do track the main sources of disagreement, but I venture an even crisper summary:
I think the Eliezer-style "immodest" view comprises two key claims:
1) There are a reasonably large number of cases where, due to inadequate equilibria or similar, those we might take to be expert classes are in fact sufficiently poorly optimised for the truth that a reasonable rationalist (or similar) could be expected to do better.
2) We can reliably identify these cases.
If they're both true, we can license ourselves to "pick fights" where we make confident bets against expert consensus (or the lack thereof) in the knowledge that we are more likely than not to be right. If not, then it seems modesty is the better approach: it might be worth acting "as if" our contra-expert impression is right and doing further work (because we might discover something important), but nonetheless deferring to the expert consensus.
It seems the best vindication of the immodesty view as Eliezer defends it would be a track record of such cases, on his part or that of the wider rationalist community. You correctly anticipate that I would indeed count the track record here as highly adverse, for two reasons:
First, when domain experts look at the "answer according to the rationalist community re. X", they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty atheist, but I find the "answer" to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of the arguments offered.
Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely anyone will discover a convincing proof of physicalism any time soon). So what we observe is compatible both with "the rationalist community is right and this field is diseased (and so gets it wrong)" and with "the rationalist community is greatly overconfident and the field is on the right track". That said, I find the sheer number of fields the rationalist community takes to be diseased enough that it can do better to be implausible on priors.
The best thing would be a clear track record to judge by: single cases, either way, don't give much to go on, as neither modesty nor immodesty would claim one should expect to win every single time. I see the rationalist community as having one big win (re. AI), yet little else. That Eliezer's book offers two pretty weak examples (e.g. the BoJ, where he got the argument from a recognised authority, and an n=1 medical intervention), and reports one case against (the big bet on Taubes), doesn't lead me to upgrade my pretty autumnal view of the track record.
> when domain experts look at the "answer according to the rationalist community re. X", they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty atheist, but I find the "answer" to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field. I see experts on animal consciousness, quantum mechanics, free will, and so on be similarly unimpressed with the sophistication of the arguments offered.
I would love to see better evidence about this. E.g., it doesn't match my experience of talking to physicists.
> I'm pretty atheist, but I find the "answer" to the theism question per LW or similar woefully rudimentary compared to state-of-the-art discussion in the field.
This would be a pertinent critique if the aim of LessWrong were to be a skeptics' forum, created to produce the most canonical debunkings (serving a societal purpose akin to Snopes). It seems much less relevant if you are trying to understand the world, unless perhaps you have very strong intuition or evidence that sophistication is highly correlated with truth.
> Unfortunately, many of these questions tend to be the sort where a convincing adjudication is far off (e.g. it seems unlikely anyone will discover a convincing proof of physicalism any time soon).
I think a convincing object-level argument could be given; you could potentially show on object-level grounds why the specific arguments or conclusions of various rationalists are off-base, thereby at least settling the issue (or certain sub-issues) to the satisfaction of people who take the relevant kinds of inside-view arguments sufficiently seriously in the first place. I'd be particularly interested to hear reasons you (or experts you defer to) reject the relevant arguments against gods, philosophical zombies, or objective collapse / non-realism views in QM.
If you mean that a convincing expert-consensus argument is likely to be far off, though, then I agree about that. As a start, experts' views and toolkits in general can be slow to change, particularly in areas like philosophy.
I assume one part of the model Eliezer is working with here is that it can take many decades for new conceptual discoveries to come to be widely understood, accepted, and used in a given field, and even longer for these ideas to spill over into other fields. E.g., some but not all philosophers have a deep understanding of Shannon, Solomonoff, and Jaynes' accounts of inductive inference, even though many of the key insights have been around for over fifty years at this point. When ideas spread slowly, consensus across all fields won't instantly snap into a new state that's maximally consistent with all of the world's newest developments, and there can be low-hanging fruit for the philosophers who do help import those ideas into old discussions.
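To make the flavour of that Solomonoff/Jaynes-style inductive inference concrete, here is a toy sketch of my own (the hypothesis names, bit-costs, and coin example are invented for illustration, not drawn from anyone's actual work): each hypothesis gets a simplicity-weighted prior of 2^-(description length in bits), and Bayes' rule then reweights by the likelihood of the observed data.

```python
# Toy illustration (invented for this comment) of simplicity-prior Bayesian
# induction: prior weight 2^-(description length in bits) per hypothesis,
# reweighted by the likelihood of the observed coin-flip data.

def posterior(hypotheses, data):
    """hypotheses: dict mapping name -> (description_bits, likelihood_fn)."""
    unnorm = {
        name: 2.0 ** -bits * likelihood(data)
        for name, (bits, likelihood) in hypotheses.items()
    }
    total = sum(unnorm.values())
    return {name: weight / total for name, weight in unnorm.items()}

# A fair coin is a "shorter program" than a biased coin with a tuned parameter.
hyps = {
    "fair":   (1, lambda d: 0.5 ** len(d)),
    "biased": (5, lambda d: 0.9 ** d.count("H") * 0.1 ** d.count("T")),
}

# Four heads: the simplicity prior still favours "fair"...
assert posterior(hyps, "HHHH")["fair"] > posterior(hyps, "HHHH")["biased"]
# ...but eight heads overturn it: evidence eventually swamps the prior.
assert posterior(hyps, "HHHHHHHH")["biased"] > posterior(hyps, "HHHHHHHH")["fair"]
```

The point of the sketch is only that this framework gives a mechanical answer to "how much does simplicity count, and when does evidence override it?", which is the kind of question the philosophers in question often handle informally.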
This is why Eliezer doesn't claim uniqueness for his arguments in philosophy; e.g., Gary Drescher used the same methodology and background ideas to arrive at largely the same conclusions, largely independently, as far as I know.
I'd consider the big advances in decision theory from Wei Dai and Eliezer to be a key example of this, and another good example of independent discovery of similar ideas by people working with similar methodologies and importing similar ideas into a relatively old and entrenched field. (Though Wei Dai and Eliezer were actively talking to each other and sharing large numbers of ideas, so the independence is much weaker.)
You can find most of the relevant component ideas circulating before that, too; but they were scattered across multiple fields in a way that made them less likely to get spontaneously combined by specialists busy hashing out the standard sub-sub-arguments within old paradigms.
I agree such an object-level demonstration would be good evidence (although of course one-sided, for reasons Pablo ably articulates elsewhere). I regret I can't provide this. On many of these topics (QM, p-zombies) I don't pretend any great knowledge; for others (e.g. theism), I can't exactly find the "rationalist case for atheism" crisply presented.
I am naturally hesitant to infer, from the (inarguable) point that diffusion of knowledge and ideas within and across fields takes time, that the best explanation for disagreement is that rationalists are just ahead of the curve. I enjoyed the small parts of Drescher I read, but I assume many reasonable philosophers are aware of his work and yet are not persuaded. Many things touted in philosophy (and elsewhere) as paradigm-shifting insights transpire to be misguided, and betting on some based on your personal assent at the object level looks unlikely to go well.
I consider the decision theory work a case in point. The view that FDT/UDT/TDT represents a great advance on the decision-theoretic state of the art is very tightly circumscribed to the rationalist community itself. Of course, many decision theorists are simply ignorant of it, given it is expounded outside the academic press. Yet others are not: there were academic decision theorists who attended some MIRI workshops, others who have been shown versions (via Chalmers, I understand), and a few who have looked at MIRI's material on arXiv and similar. Yet the prevailing view among these seems to be at best lukewarm, and at worst scathing.
This seems challenging to reconcile with a model of rationalists simply getting to the great insights early, before everyone else catches up. It could be that the decision theory community is so diseased that it cannot appreciate the technical breakthrough MIRI-style decision theory promises. Yet I find the alternative hypothesis, on which it is the rationalist community that is diseased, diving down a decision-theoretic dead end without the benefit of much interaction with decision theory experts to correct it, somewhat more compelling.
To be clear, I'm not saying that the story I told above ("here are some cool ideas that I claim haven't sufficiently saturated the philosophy community to cause all the low-hanging fruit to get grabbed, or to produce fieldwide knowledge and acceptance in the cases where it has been grabbed") should persuade arbitrary readers that people like Eliezer or Gary Drescher are on the right track; plenty of false turns and wrong solutions can also claim to be importing neglected ideas, or combining ideas in neglected ways. I'm just gesturing at one reason why I think it's possible at all to reach confident correct beliefs about lots of controversial claims in philosophy, in spite of the fact that philosophy is a large and competitive field whose nominal purpose is to answer these kinds of questions.
I'm also implicitly making a claim about there being similarities between many of the domains you're pointing to that help make it not just a coincidence that one (relatively) new methodology and set of ideas can put you ahead of the curve on multiple issues simultaneously (plus produce multiple discovery and convergence). A framework that's unusually useful for answering questions related to naturalism, determinism, and reflective reasoning can simultaneously have implications for how we should (and shouldn't) be thinking about experience, agency, volition, decision theory, and AI, among other topics. To some extent, all of these cases can be thought of as applications of a particular naturalist/reductionist toolkit (containing concepts and formalisms that aren't widely known among philosophers who endorse naturalism) to new domains.
I'm curious what criticisms you've heard of MIRI's work on decision theory. Is there anything relevant you can link to?
I don't think the account of the relative novelty of the "LW approach" to philosophy is a good fit for the available facts; "relatively" new is, I suggest, a pretty relative term.
You can find similar reduction-esque sensibilities among the logical positivists around a century ago, and a very similar approach from Quine about half a century ago. The logical positivists enjoyed a heyday amongst the philosophical community, but gradually fell from favour due to shortcomings other philosophers identified; and I suggest Quine is a sufficiently "big name" in philosophy that his approach was at least widely appreciated by the relevant academic communities.
This is challenging to reconcile with an account on which "Rationality's philosophical framework allows one to confidently get to the right answer across a range of hard philosophical problems, and the lack of assent from domain experts is best explained by their not being aware of it". Closely analogous approaches were tried a very long time ago, and haven't been found extraordinarily persuasive (even if we subset to naturalists). It doesn't help that when the "LW answer" is expounded (e.g. in the Sequences), the argument offered isn't particularly sophisticated (and often turns out to be recapitulating extant literature), nor does it usually deign to address objections raised by dissenting camps.
I suggest a better fit for this data is that the rationality approach looks particularly persuasive to people without subject-matter expertise.
Re. decision theory: beyond the general social-epistemological steers (i.e. the absence of good decision theorists raving about the breakthrough represented by MIRI-style decision theory, despite many of them having come into contact with this work one way or another), the remarks I've heard often target technical quality: Chalmers noted in a past AMA his disappointment that this theory had not been made rigorous (maybe things have changed since), and I know one decision theorist's view is that the work isn't rigorous and is a bit sloppy (on Carl's advice, I'm trying to contact more). Not being a decision theorist myself, I haven't delved into the object-level considerations.
Quineans and logical positivists have some vague attitudes in common with people like Drescher, but the analogy seems loose to me. If you want to ask why other philosophers didn't grab all the low-hanging fruit in areas like decision theory or persuade all their peers in areas like philosophy of mind (which is an interesting set of questions from where I'm standing, and one I'd like to see examined more too), I think a more relevant group to look at will be technically minded philosophers who think in terms of Bayesian epistemology (and information-theoretic models of evidence, etc.) and software analogies. In particular, analogies that are more detailed than just "the mind is like software", though computationalism is an important start. A more specific question might be: "Why didn't E.T. Jaynes' work sweep the philosophical community?"
The "Cheating Death in Damascus" and "Functional Decision Theory" papers came out in March and October, so I recommend sharing those, possibly along with the "Decisions Are For Making Bad Outcomes Inconsistent" conversation notes. I think these are much better introductions than e.g. Eliezer's old "Timeless Decision Theory" paper.
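For readers who haven't followed the FDT literature, the payoff structure at issue can be made concrete with a toy calculation (my own illustrative sketch, not the papers' formalism). Newcomb's problem is the standard test case separating causal decision theory from FDT/UDT-style theories: against a reliable predictor, the one-boxing policy has a higher average payoff, even though two-boxing causally dominates on any fixed box contents.

```python
# Illustrative sketch (not MIRI's formalism): average payoffs in Newcomb's
# problem against a predictor that guesses your choice correctly with
# probability p. The opaque box holds $1,000,000 iff one-boxing was
# predicted; the transparent box always holds $1,000.

def expected_value(one_box: bool, predictor_accuracy: float) -> float:
    p = predictor_accuracy
    if one_box:
        # You get the opaque box's contents: full only if correctly predicted.
        return p * 1_000_000
    # Two-boxing always nets the transparent $1,000; the opaque box is full
    # only when the predictor wrongly expected one-boxing.
    return 1_000 + (1 - p) * 1_000_000

# With a 99%-accurate predictor, the one-boxing policy wins on average...
assert expected_value(True, 0.99) > expected_value(False, 0.99)
# ...while against a coin-flip "predictor" the dominance argument prevails.
assert expected_value(True, 0.5) < expected_value(False, 0.5)
```

This of course only states the well-known payoff table; the disputed question, which the papers above address, is what general decision rule licenses choosing the policy with the higher average payoff here without misfiring elsewhere.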