Against Modest Epistemology
Modest epistemology doesn’t need to reflect a skepticism about causal models as such. It can manifest instead as a wariness about putting weight down on one’s own causal models, as opposed to others’.
In 1976, Robert Aumann demonstrated that two ideal Bayesian reasoners with the same priors cannot have common knowledge of a disagreement. Tyler Cowen and Robin Hanson have extended this result, establishing that even under various weaker assumptions, something has to go wrong in order for two agents with the same priors to get stuck in a disagreement.1 If you and a trusted peer don’t converge on identical beliefs once you have a full understanding of one another’s positions, at least one of you must be making some kind of mistake.
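For reference, the result is usually stated along the following lines (a standard restatement, not Aumann's original notation):

```latex
% Aumann (1976): two agents share a common prior P and information
% partitions I_1, I_2 over the state space. If, at state \omega, their
% posteriors for an event A are common knowledge, those posteriors are equal.
\[
  \text{common knowledge of }\;
  P(A \mid \mathcal{I}_1(\omega)) = q_1
  \ \text{ and }\ 
  P(A \mid \mathcal{I}_2(\omega)) = q_2
  \quad\Longrightarrow\quad q_1 = q_2 .
\]
```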
If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact. To become more rational, then, shouldn’t we set aside our claims to special knowledge or insight and modestly profess that, really, we’re all in the same boat?
When I’m trying to sort out questions like these, I often find it useful to start with a related question: “If I were building a brain from scratch, would I have it act this way?”
If I were building a brain that I expected to have some non-fatal flaws in its cognitive algorithms, I would have it spend some of its time using those flawed reasoning algorithms to think about the world, and some of its time using those same flawed reasoning algorithms to better understand its reasoning algorithms. I would have the brain spend most of its time on object-level problems, while spending some time trying to build better meta-level models of its own cognition and how its cognition relates to its apparent success or failure on object-level problems.
If the thinker is dealing with a foreign cognitive system, I would want the thinker to try to model the other agent's thinking and predict how accurate that system will be. However, the thinker should also record the empirical outcomes, and notice if the other agent's accuracy is more or less than expected. If particular agents are more often correct than the thinker's model predicts, the thinker should recalibrate its estimates so that it won't be predictably mistaken in a known direction.
In other words, I would want the brain to reason about brains in pretty much the same way it reasons about other things in the world. And in practice, I suspect that the way I think, and the way I’d advise people in the real world to think, works very much like that:
- Try to spend most of your time thinking about the object level. If you’re spending more of your time thinking about your own reasoning ability and competence than you spend thinking about Japan’s interest rates and NGDP, or competing omega-6 vs. omega-3 metabolic pathways, you’re taking your eye off the ball.
- Less than a majority of the time: Think about how reliable authorities seem to be and should be expected to be, and how reliable you are—using your own brain to think about the reliability and failure modes of brains, since that’s what you’ve got. Try to be evenhanded in how you evaluate your own brain’s specific failures versus the specific failures of other brains.2 While doing this, take your own meta-reasoning at face value.
- … and then next, theoretically, should come the meta-meta level, considered yet more rarely. But I don’t think it’s necessary to develop special skills for meta-meta reasoning. You just apply the skills you already learned on the meta level to correct your own brain, and go on applying them while you happen to be meta-reasoning about who should be trusted, about degrees of reliability, and so on. Anything you’ve already learned about reasoning should automatically be applied to how you reason about meta-reasoning.3
- Consider whether someone else might be a better meta-reasoner than you, and hence that it might not be wise to take your own meta-reasoning at face value when disagreeing with them, if you have been given strong local evidence to this effect.
That probably sounded terribly abstract, but in practice it means that everything plays out in what I’d consider to be the obvious intuitive fashion.
i.
Once upon a time, my colleague Anna Salamon and I had a disagreement. I thought—this sounds really stupid in retrospect, but keep in mind that this was without benefit of hindsight—I thought that the best way to teach people about detaching from sunk costs was to write a script for local Less Wrong meetup leaders to carry out exercises, thus enabling all such meetups to be taught how to avoid sunk costs. We spent a couple of months trying to write this sunk costs unit, though a lot of that was (as I conceived of it) an up-front cost to figure out the basics of how a unit should work at all.
Anna was against this. Anna thought we should not try to carefully write a unit. Anna thought we should just find some volunteers and improvise a sunk costs teaching session and see what happened.
I explained that I wasn’t starting out with the hypothesis that you could successfully teach anti-sunk-cost reasoning by improvisation, and therefore I didn’t think I’d learn much from observing the improvised version fail. This may sound less stupid if you consider that I was accustomed to writing many things, most of which never worked or accomplished anything, and a very few of which people paid attention to and mentioned later, and that it had taken me years of writing practice to get even that far. And so, to me, negative examples seemed too common to be valuable. The literature was full of failed attempts to correct for cognitive biases—would one more example of that really help?
I tried to carefully craft a sunk costs unit that would rise above the standard level (which was failure), so that we would actually learn something when we ran it (I reasoned). I also didn’t think up-front that it would be two months to craft; the completion time just kept extending gradually—beware the planning fallacy!—and then at some point we figured we had to run what we had.
As read by one of the more experienced meetup leaders, the script did not work. It was, by my standards, a miserable failure.
Here are three lessons I learned from that experiment.
The first lesson is to not carefully craft anything that you could literally just improvise and test immediately in its improvised version, ever. Even if the minimum improvisable product won’t be representative of the real version. Even if you already expect the current version to fail. You don’t know what you’ll learn from trying the improvised version.4
The second lesson was that my model of teaching rationality by producing units for consumption at meetups wasn’t going to work, and we’d need to go with Anna’s approach of training teachers who could fail on more rapid cycles, and running centralized workshops using those teachers.
The third thing I learned was to avoid disagreeing with Anna Salamon in cases where we would have common knowledge of the disagreement.
What I learned wasn’t quite as simple as, “Anna is often right.” Eliezer is also often right.
What I learned wasn’t as simple as, “When Anna and Eliezer disagree, Anna is more likely to be right.” We’ve had a lot of first-order disagreements and I haven’t particularly been tracking whose first-order guesses are right more often.
But the case above wasn’t a first-order disagreement. I had presented my reasons, and Anna had understood and internalized them and given her advice, and then I had guessed that in a situation like this I was more likely to be right. So what I learned is, “Anna is sometimes right even when my usual meta-reasoning heuristics say otherwise,” which was the real surprise and the first point at which something like an extra push toward agreement is additionally necessary.
It doesn’t particularly surprise me if a physicist knows more about photons than I do; that’s a case in which my usual meta-reasoning already predicts the physicist will do better, and I don’t need any additional nudge to correct it. What I learned from that significant multi-month example was that my meta-rationality—my ability to judge which of two people is thinking more clearly and better integrating the evidence in a given context—was not particularly better than Anna’s meta-rationality. And that meant the conditions for something like Cowen and Hanson’s extension of Aumann’s agreement theorem were actually being fulfilled. Not pretend ought-to-be fulfilled, but actually fulfilled.
Could adopting modest epistemology in general have helped me get the right answer in this case? The versions of modest epistemology I hear about usually involve deference to the majority view, to the academic mainstream, or to publicly recognized elite opinion. Anna wasn’t a majority; there were two of us, and nobody else in particular was party to the argument. Neither of us were part of a mainstream. And at the point in time where Anna and I had that disagreement, any outsider would have thought that Eliezer Yudkowsky had the more impressive track record at teaching rationality. Anna wasn’t yet heading CFAR. Any advice to follow track records, to trust externally observable eliteness in order to avoid the temptation to overconfidence, would have favored listening to Yudkowsky over Salamon—that’s part of the reason I trusted myself over her in the first place! And then I was wrong anyway, because in real life that is allowed to happen even when one person has more externally observable status than another.
Whereupon I began to hesitate to disagree with Anna, and hesitate even more if she had heard out my reasons and yet still disagreed with me.
I extend a similar courtesy to Nick Bostrom, who recognized the importance of AI alignment three years before I did (as I discovered afterwards, reading through one of his papers). Once upon a time I thought Nick Bostrom couldn’t possibly get anything done in academia, and that he was staying in academia for bad reasons. After I saw Nick Bostrom successfully found his own research institute doing interesting things, I concluded that I was wrong to think Bostrom should leave academia—and also meta-wrong to have been so confident while disagreeing with Nick Bostrom. I still think that oracle AI (limiting AI systems to only answer questions) isn’t a particularly useful concept to study in AI alignment, but every now and then I dust off the idea and check to see how much sense oracles currently make to me, because Nick Bostrom thinks they might be important even after knowing that I’m more skeptical.
There are people who think we all ought to behave this way toward each other as a matter of course. They reason:
a) on average, we can’t all be more meta-rational than average; and
b) you can’t trust the reasoning you use to think you’re more meta-rational than average. After all, due to Dunning-Kruger, a young-Earth creationist will also think they have plausible reasoning for why they’re more meta-rational than average.
… Whereas it seems to me that if I lived in a world where the average person on the street corner were Anna Salamon or Nick Bostrom, the world would look extremely different from how it actually does.
… And from the fact that you’re reading this at all, I expect that if the average person on the street corner were you, the world would again look extremely different from how it actually does.
(In the event that this book is ever read by more than 30% of Earth’s population, I withdraw the above claim.)
ii.
I once poked at someone who seemed to be arguing for a view in line with modest epistemology, nagging them to try to formalize their epistemology. They suggested that we all treat ourselves as having a black box receiver (our brain) which produces a signal (opinions), and treat other people as having other black boxes producing other signals. And we all received our black boxes at random—from an anthropic perspective of some kind, where we think we have an equal chance of being any observer. So we can’t start out by believing that our signal is likely to be more accurate than average.
But I don’t think of myself as having started out with the a priori assumption that I have a better black box. I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments.
To which the modest reply is: “Oh, but any crackpot could say that their personal epistemology is better because it’s based on a bunch of stuff that they think is cool. What makes you different?”
Or as someone advocating what I took to be modesty recently said to me, after I explained why I thought it was sometimes okay to give yourself the discretion to disagree with mainstream expertise when the mainstream seems to be screwing up, in exactly the following words: “But then what do you say to the Republican?”
Or as Ozy Brennan puts it, in dialogue form:
Becoming Sane Side: “Hey! Guys! I found out how to take over the world using only the power of my mind and a toothpick.”
Harm Reduction Side: “You can’t do that. Nobody’s done that before.”
Becoming Sane Side: “Of course they didn’t, they were completely irrational.”
Harm Reduction Side: “But they thought they were rational, too.”
Becoming Sane Side: “The difference is that I’m right.”
Harm Reduction Side: “They thought that, too!”
This question, “But what if a crackpot said the same thing?”, I’ve never heard formalized—though it seems clearly central to the modest paradigm.
My first and primary reply is that there is a saying among programmers: “There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code.”
This is known as Flon’s Law.
The lesson of Flon’s Law is that there is no point in trying to invent a programming language which can coerce programmers into writing code you approve of, because that is impossible.
The deeper message of Flon’s Law is that this kind of defensive, adversarial, lock-down-all-the-doors, block-the-idiots-at-all-costs thinking doesn’t lead to the invention of good programming languages. And I would say much the same about epistemology for humans.
Probability theory and decision theory shouldn’t deliver clearly wrong answers. Machine-specified epistemology shouldn’t mislead an AI reasoner. But if we’re just dealing with verbal injunctions for humans, where there are degrees of freedom, then there is nothing we can say that a hypothetical crackpot could not somehow misuse. Trying to defend against that hypothetical crackpot will not lead us to devise a good system of thought.
But again, let’s talk formal epistemology.
So far as probability theory goes, a good Bayesian ought to condition on all of the available evidence. E. T. Jaynes lists this as a major desideratum of good epistemology—that if we know A, B, and C, we ought not to decide to condition only on A and C because we don’t like where B is pointing. If you’re trying to estimate the accuracy of your epistemology, and you know what Bayes’s Rule is, then—on naive, straightforward, traditional Bayesian epistemology—you ought to condition on both of these facts, and estimate P(accuracy | know_Bayes) instead of P(accuracy). Doing anything other than that opens the door to a host of paradoxes.
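As a toy illustration of what conditioning on that extra fact does (all the numbers below are invented for the example, not drawn from any study):

```python
# A toy illustration (all numbers invented) of the difference between
# estimating P(accurate) and P(accurate | knows_Bayes) -- that is, conditioning
# on everything you know about yourself rather than only part of it.

# Hypothetical counts over a population of reasoners.
population = [
    # (knows_Bayes, accurate, count)
    (True,  True,  60),
    (True,  False, 40),
    (False, True,  30),
    (False, False, 70),
]

def p_accurate(condition=lambda knows_bayes: True):
    """Fraction of reasoners who are accurate, among those matching the condition."""
    rows = [(acc, n) for knows, acc, n in population if condition(knows)]
    total = sum(n for _, n in rows)
    hits = sum(n for acc, n in rows if acc)
    return hits / total

print(p_accurate())             # P(accurate)               = 0.45
print(p_accurate(lambda k: k))  # P(accurate | knows_Bayes) = 0.60
```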
The convergence that perfect Bayesians exhibit on factual questions doesn’t involve anyone straying, even for a moment, from their individual best estimate of the truth. The idea isn’t that good Bayesians try to make their beliefs more closely resemble their political rivals’ so that their rivals will reciprocate, and it isn’t that they toss out information about their own rationality. Aumann agreement happens incidentally, without any deliberate push toward consensus, through each individual’s single-minded attempt to reason from their own priors to the hypotheses that best match their own observations (which happen to include observations about other perfect Bayesian reasoners’ beliefs).
Modest epistemology seems to me to be taking the experiments on the outside view showing that typical holiday shoppers are better off focusing on their past track record than trying to model the future in detail, and combining that with the Dunning-Kruger effect, to argue that we ought to throw away most of the details in our self-observation. At its epistemological core, modesty says that we should abstract up to a particular very general self-observation, condition on it, and then not condition on anything else because that would be inside-viewing. An observation like, “I’m familiar with the cognitive science literature discussing which debiasing techniques work well in practice, I’ve spent time on calibration and visualization exercises to address biases like base rate neglect, and my experience suggests that they’ve helped,” is to be generalized up to, “I use an epistemology which I think is good.” I am then to ask myself what average performance I would expect from an agent, conditioning only on the fact that the agent is using an epistemology that they think is good, and not conditioning on that agent using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning or anything in particular.
Only in this way can we force Republicans to agree with us… or something. (Even though, of course, anyone who wants to shoot off their own foot will actually just reject the whole modest framework, so we’re not actually helping anyone who wants to go astray.)
Whereupon I want to shrug my hands helplessly and say, “But given that this isn’t normative probability theory and I haven’t seen modesty advocates appear to get any particular outperformance out of their modesty, why go there?”
I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.
That said, let me go on beyond my true rejection and try to construct something of a reductio. Two reductios, actually.
The first reductio is just, as I asked the person who proposed the signal-receiver epistemology: “Okay, so why don’t you believe in God like a majority of people’s signal receivers tell them to do?”
“No,” he replied. “Just no.”
“What?” I said. “You’re allowed to say ‘just no’? Why can’t I say ‘just no’ about collapse interpretations of quantum mechanics, then?”
This is a serious question for modest epistemology! It seems to me that on the signal-receiver interpretation you have to believe in God. Yes, different people believe in different Gods, and you could claim that there’s a majority disbelief in every particular God. But then you could as easily disbelieve in quantum mechanics because (you claim) there isn’t a majority of physicists that backs any particular interpretation. You could disbelieve in the whole edifice of modern physics because no exactly specified version of that physics is agreed on by a majority of physicists, or for that matter, by a majority of people on Earth. If the signal-receiver argument doesn’t imply that we ought to average our beliefs together with the theists and all arrive at an 80% probability that God exists, or whatever the planetary average is, then I have no idea how the epistemological mechanics are supposed to work. If you’re allowed to say “just no” to God, then there’s clearly some level—object level, meta level, meta-meta level—where you are licensed to take your own reasoning at face value, despite a majority of other receivers getting a different signal.
But if we say “just no” to anything, even God, then we’re no longer modest. We are faced with the nightmare scenario of having granted ourselves discretion about when to disagree with other people, a discretionary process where we take our own reasoning at face value. (Even if a majority of others disagree about this being a good time to take our own beliefs at face value, telling us that reasoning about the incredibly deep questions of religion is surely the worst of all times to trust ourselves and our pride.) And then what do you say to the Republican?
And if you give people the license to decide that they ought to defer, e.g., only to a majority of members of the National Academy of Sciences, who mostly don’t believe in God, then surely the analogous license is for theists to defer to the true experts on the subject, their favorite priesthood.
The second reductio is to ask yourself whether a superintelligent AI system ought to soberly condition on the fact that, in the world so far, many agents (humans in psychiatric wards) have believed themselves to be much more intelligent than a human, and they have all been wrong.
Sure, the superintelligence thinks that it remembers a uniquely detailed history of having been built by software engineers and raised on training data. But if you ask any other random agent that thinks it’s a superintelligence, that agent will just tell you that it remembers a unique history of being chosen by God. Each other agent that believes itself to be a superintelligence will forcefully reject any analogy to the other humans in psychiatric hospitals, so clearly “I forcefully reject an analogy with agents who wrongly believe themselves to be superintelligences” is not sufficient justification to conclude that one really is a superintelligence. Perhaps the superintelligence will plead that its internal experiences, despite the extremely abstract and high-level point of similarity, are really extremely dissimilar in the details from those of the patient in the psychiatric hospital. But of course, if you ask them, the psychiatric patient could just say the same thing, right?
I mean, the psychiatric patient wouldn’t say that, the same way that a crackpot wouldn’t actually give a long explanation of why they’re allowed to use the inside view. But they could, and according to modesty, That’s Terrible.
iii.
To generalize, suppose we take the following rule seriously as epistemology, terming it Rule M for Modesty:
Rule M: Let X be a very high-level generalization of a belief subsuming specific beliefs X1, X2, X3.… For example, X could be “I have an above-average epistemology,” X1 could be “I have faith in the Bible, and that’s the best epistemology,” X2 could be “I have faith in the words of Mohammed, and that’s the best epistemology,” and X3 could be “I believe in Bayes’s Rule, because of the Dutch Book argument.” Suppose that all people who believe in any Xi, taken as an entire class X, have an average level F of fallibility. Suppose also that most people who believe some Xi also believe that their Xi is not similar to the rest of X, and that they are not like most other people who believe some X, and that they are less fallible than the average in X. Then when you are assessing your own expected level of fallibility you should condition only on being in X, and compute your expected fallibility as F. You should not attempt to condition on being in X3 or ask yourself about the average fallibility you expect from people in X3.
Then the first machine superintelligence should conclude that it is in fact a patient in a psychiatric hospital. And you should believe, with a probability of around 33%, that you are currently asleep.
Many people, while dreaming, are not aware that they are dreaming. Many people, while dreaming, may believe at some point that they have woken up, while still being asleep. Clearly there can be no license from “I think I’m awake” to the conclusion that you actually are awake, since a dreaming person could just dream the same thing.
Let Y be the state of not thinking that you are dreaming. Then Y1 is the state of a dreaming person who thinks this, and Y2 is the state of actually being awake. It boots nothing, on Rule M, to say that Y2 is introspectively distinguishable from Y1 or that the inner experiences of people in Y2 are actually quite different from those of people in Y1. Since people in Y1 usually falsely believe that they’re in Y2, you ought to just condition on being in Y, not condition on being in Y2. Therefore you should assign a 67% probability to currently being awake, since 67% of observer-moments who believe they’re awake are actually awake.
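To make the arithmetic explicit, here is a toy sketch of the Rule M calculation above, next to what ordinary conditioning would do with the same counts; all counts and likelihoods are hypothetical, chosen only to reproduce the 67% figure in the text:

```python
# A toy version (hypothetical numbers) of the Rule M arithmetic, contrasted
# with ordinary Bayesian conditioning.
#   Y  = observer-moments that think they are awake
#   Y1 = dreaming moments that think they are awake
#   Y2 = genuinely awake moments

dreaming_but_think_awake = 1_000   # |Y1| (invented)
actually_awake           = 2_000   # |Y2| (invented; chosen to give 67%)

# Rule M: condition only on being in Y; ignore anything that distinguishes Y1 from Y2.
p_awake_rule_m = actually_awake / (actually_awake + dreaming_but_think_awake)
print(f"Rule M:            P(awake) = {p_awake_rule_m:.0%}")   # 67%

# Ordinary conditioning also uses evidence E that discriminates the two classes,
# e.g. a reality check passed 99% of the time when awake and 5% of the time
# when dreaming (likelihoods made up for illustration).
p_e_given_awake, p_e_given_dreaming = 0.99, 0.05
p_awake_given_e = (p_awake_rule_m * p_e_given_awake) / (
    p_awake_rule_m * p_e_given_awake
    + (1 - p_awake_rule_m) * p_e_given_dreaming
)
print(f"Conditioning on E: P(awake) = {p_awake_given_e:.0%}")  # ~98%
```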
Which is why—in the distant past, when I was arguing against the modesty position for the first time—I said: “Those who dream do not know they dream, but when you are awake, you know you are awake.” The modest haven’t formalized their epistemology very much, so it would take me some years past this point to write down the Rule M that I thought was at the heart of the modesty argument, and say that “But you know you’re awake” was meant to be a reductio of Rule M in particular, and why. Reasoning under uncertainty and in a biased and error-prone way, still we can say that the probability we’re awake isn’t just a function of how many awake versus sleeping people there are in the world; and the rules of reasoning that let us update on Bayesian evidence that we’re awake can serve that purpose equally well whether or not dreamers can profit from using the same rules. If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.
Cross-posted to Less Wrong and equilibriabook.com.
1. See Cowen and Hanson, “Are Disagreements Honest?”
2. This doesn’t mean the net estimate of who’s wrong comes out 50-50. It means that if you rationalized last Tuesday then you expect yourself to rationalize this Tuesday, if you would expect the same thing of someone else after seeing the same evidence.
3. And then the recursion stops here, first because we already went in a loop, and second because in practice nothing novel happens after the third level of any infinite recursion.
4. Chapter 22 of my Harry Potter fanfiction, Harry Potter and the Methods of Rationality, was written after I learned this lesson.
Hi Eliezer, I wonder if you’ve considered trying to demonstrate the superiority of your epistemic approach by participating in one of the various forecasting tournaments funded by IARPA, and trying to be classified as a ‘superforecaster’. For example, the new Hybrid Forecasting Competition is actively recruiting participants.
To me your advice seems in tension with the recommendations that have come out of that research agenda (via Tetlock and others), which finds that forecasts carefully aggregated from many people perform better than almost any individual’s—and that individuals who beat the aggregation were almost always lucky and can’t repeat the feat. I’d be interested to see how an anti-modest approach fares in direct quantified competition with alternatives.
It would be understandable if you didn’t think that was the best use of your time, in which case perhaps some others who endorse and practice the mindset you recommend could find the time to do it instead.
I think with Eliezer’s approach, superforecasters should exist, and it should be possible to be aware that you are a superforecaster. Those both seem like they would be lower probability under the modest view. Whether Eliezer personally is a superforecaster seems about as relevant as whether Tetlock is one; you don’t need to be a superforecaster to study them.
I expect Eliezer to agree that a careful aggregation of superforecasters will outperform any individual superforecaster; similarly, I expect Eliezer to think that a careful aggregation of anti-modest reasoners will outperform any individual anti-modest reasoner.
It’s worth considering what careful aggregations look like when not dealing with binary predictions. The function of a careful aggregation is to disproportionately silence error while maintaining signal. With many short-term binary predictions, we can use methods that focus on the outcomes, without any reference to how those predictors are estimating those outcomes. With more complicated questions, we can’t compare outcomes directly, and so need to use the reasoning processes themselves as data.
That suggests a potential disagreement to focus on: the anti-modest view suspects that one can do a careful aggregation based on reasoner methodology (say, weighting more highly forecasters who adjust their estimates more frequently, or who report using Bayes, and so on), whereas I think the modest view suspects that this won’t outperform uniform aggregation.
(The modest view has two components—approving of weighting past performance, and disapproving of other weightings. Since other approaches can agree on the importance of past performance, and the typical issues where the two viewpoints differ are those where we have little data on past performance, it seems more relevant to focus on whether the disapproval is correct than whether the approval is correct.)
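For concreteness, here is a minimal sketch of the two schemes being contrasted (uniform pooling versus methodology-weighted pooling), using invented forecasts, invented weights, and simple log-odds averaging; nothing here is meant to reproduce the GJP's actual aggregation algorithm:

```python
# A minimal sketch of uniform vs. weighted pooling of probability forecasts.
# Forecasts and weights are made up; log-odds averaging is one common choice.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool(forecasts, weights=None):
    """Weighted average of the forecasts in log-odds space."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    total = sum(weights)
    return inv_logit(sum(w * logit(p) for w, p in zip(weights, forecasts)) / total)

forecasts = [0.55, 0.70, 0.80, 0.60]   # four forecasters' probability estimates
weights   = [0.5,  2.0,  2.0,  1.0]    # e.g. upweighting forecasters who update often

print("uniform :", round(pool(forecasts), 3))           # ~0.67
print("weighted:", round(pool(forecasts, weights), 3))  # ~0.71
```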
OK so it seems like the potential areas of disagreement are:
- How much external confirmation do you need to know that you’re a superforecaster (or have good judgement in general), or even the best forecaster?
- How narrowly should you define the ‘expert’ group?
- How often should you define who is a relevant expert based on whether you agree with them in that specific case?
- How much should you value ‘wisdom of the crowd (of experts)’ against the views of the one best person?
- How much to follow a preregistered process to whatever conclusion it leads to, versus change the algorithm as you go to get an answer that seems right?
We’ll probably have to go through a lot of specific cases to see how much disagreement there actually is. It’s possible to talk in generalities and feel you disagree, but actually be pretty close on concrete cases.
Note that it’s entirely possible that non-modest contributors will do more to enhance the accuracy of a forecasting tournament because they try harder to find errors, but be less right than others’ all-things-considered views, because of insufficient deference to the answer the tournament as a whole spits out. Active traders enhance market efficiency, but still lose money as a group.
As for Eliezer knowing how to make good predictions, but not being able to do it himself, that’s possible (though it would raise the question of how he has gotten strong evidence that these methods work). But as I understand it, Eliezer regards himself as being able to do unusually well using the techniques he has described, and so would predict his own success in forecasting tournaments.
This is also my model of Eliezer; my point is that my thoughts on modesty / anti-modesty are mostly disconnected from whether or not Eliezer is right about his forecasting accuracy, and mostly connected to the underlying models of how modesty and anti-modesty work as epistemic positions.
I want to repeat something to make sure there isn’t confusion or double illusion of transparency; “narrowness” doesn’t mean just the size of the group but also the qualities that are being compared to determine who’s expert and who isn’t.
It’s an interesting just-so story about what IARPA has to say about epistemology, but the actual story is much more complicated. For instance, the fact that “Extremizing” works to better calibrate general forecasts, but that extremizing of superforecasters’ predictions makes them worse.
Furthermore, contrary to what you seem to be claiming about people not being able to outperform others, there are in fact “superforecasters” who outperform the average participant year after year, even if they can’t outperform the aggregate when their forecasts are factored in.
Not sure how this is a ‘just so story’ in the sense that I understand the term.
“the fact that “Extremizing” works to better calibrate general forecasts, but that extremizing of superforecasters’ predictions makes them worse.”
How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts.
I know very well about superforecasters (I’ve read all of Tetlock’s books and interviewed him last week), but I am pretty sure an aggregation of superforecasters beats almost all of them individually, which speaks to the benefits of averaging a range of people’s views in most cases. Though in many cases you should not give much weight to those who are clearly in a worse epistemic position (non-superforecasters, whose predictions Tetlock told me were about 10-30x less useful).
Doesn’t this clearly demonstrate that the superforecasters are not using modest epistemology? At best, this shows that you can improve upon a “non-modest” epistemology by aggregating them together, but does not argue against the original post.
Hi Halffull—now I see what you’re saying, but actually the reverse is true. That superforecasters have already extremised shows their higher levels of modesty. Extremising is about updating based on other people’s views, and realising that because they have independent information to add, after hearing their view, you can be more confident of where to shift from your prior.
Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They each privately see the same number of (independent) flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weights the other person’s estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, which takes into account the equal information both of them have to pull them away from their prior.
Once they’ve done that, there won’t be gains from further extremising. But a non-humble participant would fail to properly extremise based on the information in the other person’s view, leaving accuracy to be gained if this is done at a later stage by someone running the forecasting tournament.
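A toy Beta-Binomial version of the coin example (with made-up counts) shows the size of the effect:

```python
# A minimal sketch, assuming a Beta-Binomial model and invented flip counts, of
# why pooling two peers' independent evidence lands on a more extreme value
# than either peer's individual posterior.

# Shared prior: coin is probably close to fair -> Beta(10, 10), mean 0.5.
PRIOR_HEADS, PRIOR_TAILS = 10, 10

# Each peer privately sees 20 independent flips, 18 of them heads.
peer_heads, peer_tails = 18, 2

def posterior_mean(heads, tails):
    """Posterior mean of the coin's heads-probability under the Beta prior."""
    return (PRIOR_HEADS + heads) / (PRIOR_HEADS + PRIOR_TAILS + heads + tails)

individual = posterior_mean(peer_heads, peer_tails)        # (10+18)/40 = 0.70
# Pooling both peers' independent data before updating:
combined = posterior_mean(2 * peer_heads, 2 * peer_tails)  # (10+36)/60 ~ 0.77

print(f"each peer alone: {individual:.2f}, after sharing: {combined:.2f}")
```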
This is what I’m talking about when I say “just-so stories” about the data from the GJP. One explanation is that superforecasters are going through this thought process; another would be that they discard non-superforecasters’ knowledge, and therefore end up more extreme without explicitly running the extremizing algorithm on their own forecasts.
Similarly, the existence of superforecasters themselves argues for a non-modest epistemology, while the fact that the extremized aggregation beats the superforecasters may argue for somewhat of a more modest epistemology. Saying that the data here points one way or the other is, to my mind, cherry-picking.
“...the existence of superforecasters themselves argues for a non-modest epistemology...”
I don’t see how. No theory on offer argues that everyone is an epistemic peer. All theories predict some people have better judgement and will be reliably able to produce better guesses.
As a result I think superforecasters should usually pay little attention to the predictions of non-superforecasters (unless it’s a question on which expertise pays few dividends).
I think what modesty proponents are really doing is not generalizing endlessly, but identifying a relatively wide generalization, something like “people who have academically studied this field” or “people who share the basic tenets of rational inquiry”, and sticking with it.