How modest should you be?
In this post, I discuss the extent to which we may appeal to the object-level reasons when forming beliefs. I argue that the object-level reasons should sometimes play a role in determining our credences, even if that is only via our choice of epistemic peers and superiors. Thanks to Gregory Lewis, Stefan Schubert, Aidan Goth, Stephen Clare and Johannes Ackva for comments and discussion.
1. Introduction
There was a lively discussion in EA recently about the case for and against epistemic modesty. On epistemic modesty, one’s all-things-considered credences (the probability one puts on different propositions) should be determined, not by consideration of the object-level arguments, but rather by the credences of one’s epistemic peers and superiors.
Suppose I am trying to form a judgement about the costs and benefits of free trade. On epistemic modesty, my credence should be determined, not by my own impression of the merits of the object-level arguments for and against free trade (regarding issues such as comparative advantage and the distributional effects of tariffs), but rather by what the median expert (e.g. the median trade economist) believes about free trade. Even if my impressions of the arguments lean Trumpian, epistemic modesty requires me to defer to the experts. There is thus a difference between (1) my own personal impressions and (2) the all-things-considered credences I ought to have.
In this way, epistemic modesty places restrictions on the extent to which agents may permissibly appeal to object-level reasons when forming credences. When we are deciding what to believe about an issue, meta-level considerations (i.e. non-object-level considerations) about the epistemic merits of the expert group are very important.[1] These meta-level considerations include:
Time: These putative experts have put significant time into thinking about the question
Ability: They are selected for high cognitive ability
Scrutiny: Their work has been subject to substantial scrutiny from their peers
Numbers: They are numerous.
On epistemic modesty, a strong argument for the view that we should defer to trade economists on trade is that they are smart people who have put a lot of time into thinking about the topic, have been subject to significant external scrutiny, and are numerous, which produces a wisdom-of-crowds effect. These factors are major advantages that trade economists have over me on this topic, and suggest that the aggregate of economists is >10x more likely to be correct than I am. So, on this topic, deference seems like a reasonable strategy.
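The wisdom-of-crowds effect can be made concrete with a toy Condorcet-style calculation. This is purely an illustration: the accuracy figure is invented, and the assumption that experts are independent is idealised.

```python
from math import comb

def majority_correct_prob(n: int, p: float) -> float:
    """Probability that a majority of n independent experts,
    each correct with probability p, gets a yes/no question right."""
    # Sum P(exactly k correct) over every k that is a strict majority.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# A single expert who is right 60% of the time...
single = 0.6
# ...versus the majority vote of 101 such experts.
crowd = majority_correct_prob(101, 0.6)
```

Here the majority of 101 modestly reliable experts is right roughly 98% of the time. Real experts are correlated with each other, so the actual gain from aggregation is smaller than this idealised calculation suggests, but the direction of the effect is the point.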
1.1. Do the object-level arguments matter at all?
However, on epistemic modesty, the object-level arguments are not completely irrelevant to the all-things-considered credences I ought to have. Before we get into why, it is useful to distinguish two types of object-level arguments:
People’s object-level arguments on propositions relevant to people’s epistemic virtue on p, not including object-level arguments about p.
People’s object-level arguments and credences about p.
Suppose that our aim is to assess some proposition, such as the efficacy of sleeping pills. When we are choosing peers and superiors on this question, people’s object-level reasoning on medicine in general is relevant to their epistemic virtue. If we learn that someone has good epistemics on medical questions and knows the literature well (picture a Bayesian health economist), then that would be a reason to upgrade their epistemic virtue on the question of whether sleeping pills work. If we learn that someone has in the past appealed to homeopathic arguments about medicine, then that would be a reason to discount their epistemic virtue on the efficacy of sleeping pills. Thus, 1 is relevant to people’s epistemic virtue, and so is relevant to our all-things-considered credences on p.
This point is worth emphasising: even on epistemic modesty, consideration of the object-level arguments can be important for your all-things-considered credence, although the effect is indirect and comes via peer selection.
However, on some strong versions of epistemic modesty, 2 is not relevant to our assessment of people’s epistemic virtue on p. We must assess people’s epistemic virtue on a proposition p in advance of considering their object-level reasons and verdicts on p. Having selected trade economists as my epistemic superiors on the benefits of trade, I cannot demote them merely because their object-level arguments on trade seem implausible. Call this the Object-level Reasons Restriction.
Object-level Reasons Restriction = Your own impressions of the object-level reasons regarding p cannot be used to determine people’s epistemic virtue on p.[2]
2. Problems with the Object-level Reasons Restriction
In this section, I will argue against the Object-level Reasons Restriction using an argument from symmetry: if type 1 reasons are admissible when evaluating epistemic virtue, then type 2 reasons are as well.
Suppose I am forming beliefs about the efficacy of sleeping pills. When I am choosing peers and superiors on this question, on epistemic modesty I am allowed to take into account people’s object-level reasoning on medicine in general: I can justifiably exclude homeopaths from my peer group, for example. But if I choose Geoff as my peer on the efficacy of sleeping pills and he starts making homeopathic arguments about them, then I am not allowed to demote him from my peer group. I see no reason for this asymmetry, so I see no reason to accept the Object-level Reasons Restriction. If the object-level arguments about medicine in general are relevant to one’s epistemic virtue on the efficacy of sleeping pills, then the object-level reasons about the efficacy of sleeping pills are also relevant.
Suppose I have selected Geoff and Hilda as equally good truth-trackers on the efficacy of sleeping pills, and I then get the information that Geoff appeals to homeopathic arguments. From a Bayesian point of view, this is clearly an update away from him being as good a truth-tracker on the efficacy of sleeping pills as Hilda.
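The update can be sketched with Bayes’ rule. The numbers below are invented purely for illustration: the two “types” of truth-tracker and their propensities to make homeopathic arguments are modelling assumptions, not anything stated in the case above.

```python
def posterior_good(prior_good: float,
                   p_homeopathy_if_good: float,
                   p_homeopathy_if_poor: float) -> float:
    """Posterior probability that someone is a good truth-tracker,
    given that they appealed to a homeopathic argument (Bayes' rule)."""
    prior_poor = 1 - prior_good
    evidence = (prior_good * p_homeopathy_if_good
                + prior_poor * p_homeopathy_if_poor)
    return prior_good * p_homeopathy_if_good / evidence

# Start treating Geoff as a peer (50/50 that he is the good type);
# suppose good truth-trackers rarely (2%) and poor ones often (40%)
# make homeopathic arguments -- illustrative numbers only.
updated = posterior_good(0.5, 0.02, 0.40)
```

On these made-up numbers, one homeopathic argument drops Geoff from 50% to under 5% probability of being as good a truth-tracker as Hilda, which is exactly the “clear update” the Bayesian point asserts.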
I envision two main responses to this, one from epistemic egoism, and one from rule epistemology.
2.1. Epistemic egoism
Firstly, it might be argued that by downgrading Geoff’s epistemic virtue, I in effect put extra weight on my own beliefs merely because they are mine. Since we are both peers, I should treat each of us equally.
I don’t think this argument works. My argument for demoting Geoff is not “my belief that his reasoning is bad” but rather “his reasoning is bad, which I believe”.[3] These two putative reasons are different. One can see this by using the following counterfactual test. Imagine a hypothetical world where I believe that Geoff’s reasoning is bad, but I am wrong. Do I, in the actual world, believe that I should still demote Geoff in the imagined world? No, I do not. If I am wrong, I should not demote. So, my reason for downgrading is not my belief that the reasoning is bad, but rather the proposition that the reasoning is bad, which I believe. Therefore, there is no objectionable epistemic egoism here.
In the sleeping pills case, when I demote someone from my peer group because they appeal to homeopathy, my reason to demote them is not my belief that homeopathy is unscientific, it is the fact that homeopathy is unscientific.
2.2. Rule epistemology
Another possible response is to say that I do better by following the rule of not downgrading on the basis of the object-level reasons, so I should follow that rule rather than trying to find the exceptions. My response is that I do even better by following that rule except when it comes to Geoff. There is a difference between trying (and maybe failing) to justifiably downgrade experts and actually justifiably downgrading them. If it is asked, ‘Why think you should downgrade in this case?’, then a good answer is simply to refer to the object-level reasons. This is a good answer regarding sleeping pills, just as it is, as everyone agrees, regarding medical expertise in general.
It is true that the rule of demoting people on the basis of the object-level reasons is liable to abuse. A mercantilist could use this kind of reasoning to demote all trade economists to be his epistemic inferiors. However, the problem here is bad object-level reasoning and bad assessment of epistemic virtue; it is not evidence for the Object-level Reasons Restriction. As we have seen, proponents of epistemic modesty agree that object-level reasons are sometimes admissible when we are assessing people’s epistemic virtue. For example, the Object-level Reasons Restriction would not rule out the mercantilist using his views on trade as a reason to demote economists on other questions in economics. This is also an error, but one that merely stems from mundane bad reasoning, which we have non-modest reasons to care about.
What is usually going on when mercantilists dismiss the expert consensus on trade is (1) they simply don’t understand the arguments for and against free trade; and (2) that many of them also simply do not know that there is such a strong expert consensus on trade. This is simply inept reasoning, not an argument for never considering the object-level arguments when picking one’s peers.
3. Where does this leave us?
I have argued that it is sometimes appropriate to appeal to the object-level arguments on p when deciding on people’s epistemic virtue on p. I illustrated this with (I hope) a relatively clear case involving a rogue homeopath.
The arguments here are potentially practically important. Any sophisticated assessment of some topic will involve appealing to the object-level reasons to give more weight to the views of some putative experts who appear to do equally well on the meta-level considerations.
This is the approach that GiveWell takes when assessing the evidence on interventions in global health. Their approach is not to just take the median view in the literature on some question, but rather to filter out certain parts of the literature on the basis of general methodological quality. In their post on Common Problems with Formal Evaluations, they say they are hesitant to use non-RCT evidence because of selection effects and publication bias.[4] In this way, object-level arguments play a role in filtering the body of evidence that GiveWell responds to. Nonetheless, their approach is still modest in the sense that the prevailing view found among studies of sufficiently high quality plays a major role in determining their all-things-considered view.
In his review of the effect of saving lives on fertility, David Roodman rules out certain types of evidence, such as cross-country regressions and some of the instrumental variables studies, and puts more weight on other quasi-experimental studies. In this way, his final credence is determined by the expert consensus as weighted by the object-level reasons, rather than the crude median of the aggregate of putative experts.
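The difference between a crude median of the literature and a quality-weighted aggregate can be sketched as follows. This illustrates the general idea only, not Roodman’s actual method; the study effects and weights are invented.

```python
from statistics import median

# (estimated effect, methodological-quality weight in [0, 1]) --
# e.g. cross-country regressions get low weight, quasi-experimental
# studies get high weight. All numbers are invented for illustration.
studies = [(0.9, 0.1), (0.8, 0.1), (0.7, 0.2), (0.3, 0.9), (0.2, 0.8)]

# Crude aggregation: the median study, ignoring quality.
crude_median = median(effect for effect, _ in studies)

# Quality-weighted aggregation: weight each study by its
# methodological quality before averaging.
weighted_mean = (sum(effect * w for effect, w in studies)
                 / sum(w for _, w in studies))
```

On these invented numbers, the crude median (0.7) is dominated by the low-quality studies, while the quality-weighted mean (≈0.35) tracks the better-identified ones, so the two procedures can come apart substantially.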
Continental philosophers—Foucauldians, Hegelians, Marxists, postmodernists—do pretty well on the meta-level considerations—time, general cognitive ability, scrutiny and numbers. But they do poorly at object-level reasoning. This is a good reason to give them less weight than analytic philosophers when forming credences about philosophy.[5]
The examples above count against an ‘anything goes’ approach to assessments of the object-level reasons; the object-level reasoning still has to be done well. David Roodman and GiveWell put incredible amounts of intellectual heft into filtering the evidence. Epistemic peerhood and superiority are relative, and Roodman and GiveWell set a high bar. The point here is just that the object-level reasons are sometimes admissible.
The object-level reasons and the meta-level considerations should each play a role in assessments of epistemic virtue. This can justify positions that seem immodest. Some examples that are top of mind for me:
We should put less weight on estimates of climate sensitivity that update from a uniform prior, as I argued here.
We should almost all ignore nutritional epidemiology and just follow an enlightened common sense prior on nutrition.
We should often not defer to experts who use non-Bayesian epistemology, where this might make a difference relative to the prevailing (e.g. frequentist or scientistic) epistemology. For example, this arguably played a role in early mainstream expert scepticism towards face masks as a tool to prevent COVID transmission.
[1] I avoid calling these ‘outside view’ reasons because the object-level reasons might also be outside view/base rate-type reasons. It might be that the people who do well on the meta-level reasons ignore the ‘outside view’ reasons. Witness, for example, the performance of many political scientists in Tetlock’s experiments.
[2] This is how I interpret Greg Lewis’ account of epistemic modesty: “One rough heuristic for strong modesty is this: for any question, find the plausible expert class to answer that question (e.g. if P is whether to raise the minimum wage, talk to economists). If this class converges on a particular answer, believe that answer too. If they do not agree, have little confidence in any answer. Do this no matter whether one’s impression of the object level considerations that recommend (by your lights) a particular answer.”
[3] David Enoch, “Not Just a Truthometer: Taking Oneself Seriously (but Not Too Seriously) in Cases of Peer Disagreement,” Mind 119, no. 476 (January 10, 2010): sec. 7, https://doi.org/10.1093/mind/fzq070.
[4] I have some misgivings about the RCT-focus of GiveWell’s methodology, but I think the general approach of filtering good and bad studies on the basis of methodological quality is correct.
[5] I think a reasonable case could be made for giving near zero weight to the views of Continental philosophers on difficult philosophical topics.
This isn’t directly related to your point, but I think there are a number of practical issues with most attempts at epistemic modesty/deference, that theoretical approaches like this one do not adequately account for.
1) Misunderstanding of what experts actually mean. It is often easier to defer to a stereotype in your head than to fully understand an expert’s views, or a simple approximation thereof.
Dan Luu gives the example of SV investors who “defer” to economists on the issue of discrimination in competitive markets without actually understanding (or perhaps reading) the relevant papers.
In some of those cases, it’s plausible that you’d do better trusting the evidence of your own eyes/intuition over your attempts to understand experts.
2) Misidentifying the right experts. In the US, it seems like the educated public roughly believes that “anybody with a medical doctorate” is approximately the relevant expert class on questions as diverse as nutrition, the fluid dynamics of indoor air flow (if the airflow happens to carry viruses), and the optimal allocation of limited (medical) resources.
More generally, people often default to the closest high-status group/expert to them, without accounting for whether that group/expert is epistemically superior to other experts slightly further away in space or time.
2a) Immodest modesty.* As a specific case/extension of this, when someone identifies an apparent expert or community of experts to defer to, they risk (incorrectly) believing that they have deference (on this particular topic) “figured out”, and thus choose not to update on either object- or meta-level evidence that they did not correctly identify the relevant experts. The issue may be exacerbated beyond “normal” cases of immodesty if there’s a sufficiently high conviction that you are being epistemically modest!
3) Information lag. Obviously any information you receive is to some degree or another from the past, and risks being outdated. Of course, this lag applies to all evidence you have; at the most trivial level, even sensory experience isn’t really in real time. But I think it is reasonable to assume that attempts to read expert claims/consensus are disproportionately likely to have a significant lag problem, compared to your own present evaluations of the object-level arguments.
4) Computational complexity in understanding the consensus. Trying to understand the academic consensus (or lack thereof) from the outside might be very difficult, to the point where establishing your own understanding from a different vantage point might be less time-consuming. Unlike 1), this presupposes that you are able to correctly understand/infer what the experts mean; the point is just that it might not be worth the time to do so.
5) Community issues with groupthink/difficulty in separating out beliefs from action. In an ideal world, we make our independent assessments of a situation, report it to the community, in what Kant[1] calls the “public (scholarly) use of reason” and then defer to an all-things-considered epistemically modest view when we act on our beliefs in our private role as citizens.
However, in practice I think it’s plausibly difficult to separate out what you personally believe from what you feel compelled to act on. One potential issue with this is that a community that’s overly epistemically deferential will plausibly have less variation, and lower affordance for making mistakes.
--
*As a special case of that, people may be unusually bad at identifying the right experts when said experts happen to agree with their initial biases, either on the object-level or for meta-level reasons uncorrelated with truth (eg use similar diction, have similar cultural backgrounds, etc)
[1] ha!
This comment is great, strong-upvoted.
There are enough individual and practical considerations here (in both directions) that in many situations the actual thing I would advocate for is something like “work out what you would do with both approaches, check against results ‘without fear or favour’, and move towards whatever method is working best for you”.
Thanks for the compliment!
Yeah that makes sense! I think this is a generally good approach to epistemics/life.
I agree that lots of these considerations are important. On 2) especially, I agree that being epistemically modest doesn’t make things easy because choosing the right experts is a non-trivial task. One example of this is using AI researchers as the correct expert group on AGI timelines, which I have myself done in the past. AI researchers have shown themselves to be good at producing AI research, not at forecasting long-term AI trends, so it’s really unclear that this is the right way to be modest in this case.
On 4 also—I agree. I think coming to a sophisticated view will often involve deferring to some experts on specific sub-questions using different groups of experts. Like maybe you defer to climate science on what will happen to the climate, philosophers on how to think about future costs, economists on the best way forward, etc. Identifying the correct expert groups is not always straightforward.
Thanks for the reply! One thing you and AGB reminded me of that my original comment elided over is that some of these personal and “practical” considerations apply in both directions. For example for #4 there are many/most cases where understanding expert consensus is easier rather than harder than coming up with your own judgment.
It’d perhaps be interesting if people produced a list of the most important/common practical considerations in either direction, though ofc much of that will be specific to the individual/subject matter/specific situation.
Thanks John, I really enjoyed this (as I do basically everything you write). Two comments.
First, would this be a reasonable gloss on your position: “defer to the experts, except when you know what their reasoning is and can see where it’s gone wrong”? FWIW, this gloss seems exactly the right response to epistemic humility, taking a principled middle line between “always defer” and “never defer”.
Second, I know this is by-the-by to your central claim, but can you explain and/or give examples of where continental philosophers have done “poorly at object-level reasoning”? I am (obviously) very sympathetic to the conclusion, but you don’t supply any reasons for it.
It seems quite difficult to argue that a whole class of people engages in poor reasoning, unless membership of that class necessitates accepting something that is clearly false (e.g. one might claim Holocaust deniers all engage in poor reasoning). But I can’t think of anything that all continental philosophers subscribe to, in virtue of being continental philosophers, and hence I can’t think of anything they all sign up to that clearly displays poor reasoning.
Hi Michael, I’m blushing!
Yes I think that would be a reasonable view to believe, but my point here is just about what role the object-level reasons should play in our epistemics. I do think something like a middle way is the right path, though I don’t have a fully worked out theory. There is a good discussion of the topic here by Michael Huemer. I should note that I am generally very pro at least figuring out what the experts think about a topic in order to form reasonable views—the views of others should weigh heavily in our reasoning, especially given the widespread tendency to overconfidence. The idea of just ignoring all the object-level reasons seems wrong to me, however.
On my definition of continental philosophy, it is a form of philosophy that puts little to no value on clarity in writing. I think this is because the work of continental philosophers lacks substantive merit—when you have nothing to say, a good strategy is to be unclear; when you have no cards, all you can do is bluff. This leads to passages such as this from Hegel
Or this from Foucault
A central confusion for continental philosophers is acceptance of the ‘worst argument in the world’, which is that “We can know things only
as they are related to us
under our forms of perception and understanding
insofar as they fall under our conceptual schemes
from our cultural/economic perspective
insofar as they are formulated in language.
So, we cannot know things as they are in themselves.” This is a common argument at the basis of relativism of different kinds.
I think this is an interesting test case for epistemic modesty because from the outside, these people look a lot like experts. It is only by understanding some philosophy that you could reasonably discount their epistemic virtue.
I realize this is a total tangent to the point of your post, but I feel you’re giving short shrift here to continental philosophy.
If it were only about writing style, I’d say fair: continental philosophy has chosen a style of writing, resembling that of other traditions, that tries to avoid over-simplifying and compressing understanding down into a few words that are easily misunderstood. Whereas you see unclear writing, I see a desperate attempt to say anything detailed about reality without accidentally pointing in the wrong direction.
This is not to say that there aren’t bad continental philosophers who hide behind this method to say nothing, but I think it’s unfair to complain about it just because it’s hard to understand and takes a lot of effort to suss out what is being said.
As to the central confusion you bring up, the unfortunate thing is that the worst argument in the world is technically correct: we can’t know things as they are in themselves, only as we perceive them to be, i.e. there is no view from nowhere. Where it goes wrong is in thinking that just because we always know the world from some vantage point, trying to understand anything is pointless and any belief is equally useful. It can both be true that there is no objective way that things are and that some ways of trying to understand reality do better than others at helping us predict it.
I think the confusion that the worst argument in the world immediately implies we can’t know anything useful comes from only seeing that the map is not itself the territory but not also seeing that the map is embedded in the territory (no Cartesian dualism).
Three more quick thoughts.
First, how does listening to your peers solve the problem of overconfidence? Surely all your peers are, on average, as overconfident as you? Not saying you need to have an answer, more thinking out loud.
Second, object-level reasons need to be in the story somewhere. What else are experts supposed to use to form their views—the opinions of existing experts? If experts can and must appeal to object-level reasons, it’s then unsettling to say non-experts can make no use of them.
Third, I agree those quotes are bananas. I’ve never really understood what continental philosophers take each other to be saying—it’s all gloriously unclear to me.
Thanks for writing this post. I found it interesting and I love that you suggest practical takeaways. Overall, my one-line takeaway is something like that suggested in Michael Plant’s comment: “defer to the experts, except those that seem to have poor epistemics or unreasonable object-level beliefs”.
It seems to me like the arguments presented in section 2 leave us with a slightly weaker version of the Object-level Reasons Restriction, but still keep us very constrained in our use of object-level considerations.
Let’s model experts as having a knowledge base (that includes broad beliefs like “homeopathy can’t work” and more detailed facts like particular ways in which serotonin interacts with melatonin) and some level of epistemic quality (how well they can derive new information from their knowledge base). I take your argument to basically be “we should consider their underlying knowledge base when assessing how much we should defer to them, and give a heavy penalty for unreasonable beliefs that relate to our object of inspection”.
An expert who believes in homeopathy has a wrong model of how medicine works. We know this because there is an expert consensus against homeopathy (sort of). This means that his reasoning about our statement of interest would potentially be clouded by false facts and intuitions.
My point here is that it is not exactly what I’d describe as an object-level claim. Or at least, something far enough away that we can find a different set of experts to check against or that we might be experts in ourselves (so again, acting from modesty).