Ingroup Deference

Epistemic status: yes. All about epistemics

Introduction

In principle, all that motivates the existence of the EA community is collaboration around a common goal. Just as the shared goal of preserving the environment characterizes the environmentalist community, say, the EA community is supposed to be characterized by the shared goal of doing the most good.
But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.

My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus. When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record.
This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:

  1. Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.

  2. Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.

And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting philanthropic strategy accordingly.

But treating EA thought as generic may not be a good first approximation. Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is.
The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question. (I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.) If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution of beliefs among people not involved in the EA community.
This is hardly a mind-blowing point, and I’m sure I’m not the first to explore it.[1] But hopefully I can say something useful about how far it goes and how to distinguish it from more suspicious arguments for ingroup deference.

Disagreement in the abstract

As stated above, the intuition we’re exploring—and ultimately rejecting—is that EAs shouldn’t defer to each other’s judgment any more than to anyone else’s on questions lacking consensus. To shed light where this intuition may have come from, and where it can go wrong, let’s start by reviewing some of the theory surrounding disagreement in the abstract.

One may start out with a probability distribution over some state space, learn something, and then change one’s probability distribution in light of what was learned. The first distribution is then one’s prior and the second is one’s posterior. A formula for going from a prior to a posterior in light of some new information is an updating rule. Bayes’ Rule is the most famous.
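For readers who want the formula concrete, here is a minimal sketch of a single Bayes' Rule update in Python, with toy numbers I've made up: a 1/3 prior on rain and assumed forecast accuracies.

```python
from fractions import Fraction

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Toy numbers: prior P(rain) = 1/3; the forecast says "rain" 9/10 of the time when
# it will rain and 2/10 of the time when it won't.
print(bayes_update(Fraction(1, 3), Fraction(9, 10), Fraction(2, 10)))  # 9/13
```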
Someone’s uninformed prior is their ultimate prior over all possible states of the world: the thing they’re born with, before they get any information at all, and then open their eyes and begin updating from. Two people with different uninformed priors can receive the same information over the course of their lives, both always update their beliefs using the same rule (e.g. Bayes’), and yet have arbitrarily different beliefs at the end about anything they haven’t learned for certain.
Subjective Bayesianism is the view that what it means to be [epistemically] “rational” is simply to update from priors to posteriors using Bayes’ Rule. No uninformed prior is more or less rational than any other (perhaps subject to some mild restrictions). Objective Bayesianism adds the requirement that there’s only one uninformed prior it is rational to have. That is, it’s the view that rationality consists of having the rational uninformed prior at bottom and updating from priors to posteriors using Bayes’ Rule.
For simplicity, through the rest of this post, the term “prior” will always refer to an uninformed prior. We’ll never need to refer to any intermediate sort of prior. People will be thought of as coming to their current beliefs by starting with an [uninformed] prior and then updating, once, on everything they’ve ever learned.
Two people have common knowledge of something if both know it, both know that both know it, and so on ad infinitum. Two people have a common prior if they have common knowledge that they have the same prior. So: the condition that two people have common knowledge that they are rational in the Objective Bayesian sense is essentially equivalent to the condition that they have (a) common knowledge that they are rational in the Subjective Bayesian sense and (b) a common prior. Common knowledge may seem like an unrealistically strong assumption for any context, but I believe everything I will say will hold approximately on replacing common knowledge with common p-belief, as defined by Monderer and Samet (1989).
For simplicity, throughout the rest of this post, the term “rationality” will always refer to epistemic rationality in the Subjective Bayesian sense. This is not to take a stand for Subjective Bayesianism; indeed, as you’ll see, this post is written from something of an Objective position. But it will let us straightforwardly refer to assumption (a), common knowledge that everyone updates using Bayes’ Rule, as CKR (“common knowledge of rationality”), and to assumption (b) as CP (“common priors”).
Finally, people will be said to “disagree” about an event if their probabilities for it differ and they have common knowledge of whose is higher. Note that the common knowledge requirement makes this definition of “disagree” stronger than the standard-usage definition: you might disagree with, say, Trump about something in the standard-usage sense, but not in the sense used here, assuming he doesn’t know what you think about it at all.
As it turns out, if a pair of people have CP and CKR, then there is no event about which they disagree. This is Aumann’s (1976) famous “agreement theorem”. It’s often summarized as the claim that “rational people cannot agree to disagree”, though this phrasing can make it seem stronger than it is. Still, it’s a powerful result.
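To make the mechanism concrete, here is a minimal simulation, under a setup I've made up purely for illustration: eight equally likely states, an event E = {0, 1, 2, 3}, and two agents who each observe only which cell of their information partition contains the true state. Each round, both agents publicly announce their posterior for E, and everyone refines their information on what the announcements reveal. (This simultaneous-announcement protocol is a variant of the alternating one analyzed by Geanakoplos and Polemarchakis (1982), mentioned in footnote 5; the partitions here are finite, as that result requires.)

```python
from fractions import Fraction

# Toy setup (assumed for illustration): eight equally likely states; the event of
# interest is E = {0, 1, 2, 3}; each agent observes only which cell of their
# information partition the true state falls in.
states = range(8)
prior = {s: Fraction(1, 8) for s in states}
E = {0, 1, 2, 3}
partitions = {
    "agent_1": [{0, 1}, {2, 3}, {4, 5}, {6, 7}],
    "agent_2": [{0, 2, 4, 6}, {1, 3, 5, 7}],
}

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def posterior(cell):
    return sum(prior[s] for s in cell & E) / sum(prior[s] for s in cell)

def refine(partition, blocks):
    # Common refinement: intersect each cell with each "announcement block",
    # i.e. the set of states at which the speaker would have announced that value.
    return [c & b for c in partition for b in blocks if c & b]

true_state = 5
for round_number in range(10):
    posteriors = {name: posterior(cell_of(p, true_state)) for name, p in partitions.items()}
    print(round_number, {name: str(q) for name, q in posteriors.items()})
    if len(set(posteriors.values())) == 1:
        break  # posteriors are equal: no more room to "agree to disagree"
    # Simultaneous public announcements: each listener learns which states are
    # consistent with each speaker's announced posterior, and refines accordingly.
    snapshot = {name: list(p) for name, p in partitions.items()}
    for speaker, p in snapshot.items():
        blocks = {}
        for c in p:
            blocks.setdefault(posterior(c), set()).update(c)
        for listener in partitions:
            if listener != speaker:
                partitions[listener] = refine(partitions[listener], blocks.values())
```

In this run, agent_2 comes to agree with agent_1 after one round without ever learning agent_1's underlying information; the announced posterior alone does the work, which is the sense in which the theorem does not require either party to learn everything the other knows.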

Two people may satisfy CP and CKR, and have different beliefs about some event, if the direction of the difference isn’t common knowledge between them. The difference will simply be due to a difference in information. The mechanism that would tend to eliminate the disagreement—one person updating in the other’s direction—breaks when at least one of the parties doesn’t know which direction to update in.
For example, suppose Jack and Jill have a common prior over the next day’s weather, and common knowledge of the fact they’re both perfectly good at updating on weather forecasts. Then suppose Jack checks his phone. They both know that, unless the posterior probability of rain the next day exactly equals their prior, the probability Jack assigns to rain now differs from Jill’s. But Jill can’t update in Jack’s direction, because she doesn’t know whether to shift her credence up or down.
Likewise, suppose we observe a belief-difference (whose direction isn’t common knowledge, of course) between people satisfying CP and CKR, we trust ourselves to be rational too, and we have these two people’s shared prior. Then we should simply update from our prior in light of whatever we know, including whatever we might know about what each of the others knows. If we see that Jack is reaching for an umbrella, and we see that Jill isn’t because she hasn’t seen Jack, we should update toward rain. Likewise, if we see that a GiveWell researcher assigns a high probability to the event that the Malaria Consortium is the charity whose work most cheaply increases near-term human wellbeing, and we see that some stranger assigns a low probability to any particular charity (including MC), we should update toward MC. There’s no deep mystery about what to do, and we don’t feel troubled when we find ourselves agreeing with one person more than the other.

But we often observe belief-differences between people not satisfying CP and CKR. By the agreement theorem, all disagreements involve departures from CP or CKR: all debates, for instance. And people with different beliefs may lack CP or CKR even when the direction of their disagreement isn’t common knowledge. We may see Jack’s probability of rain differ from Jill’s both because he’s checked the forecast, which predicts rain, and because he’s just more pessimistic about the weather on priors. We may see GiveWell differ from Lant Pritchett both because they’ve done some charity-research Lant doesn’t yet know about and because they’re more pessimistic about economic-growth-focused work than Lant is.
In these cases, what to do is more of a puzzle. If we are to form precise beliefs that are in any sense Bayesian, we ultimately have to make judgments about whose prior (or which other prior) seems more sensible, and about who seems more rational (or, if we think they’re both making mistakes in the same direction, what would be a more rational direction). But the usual recommendation for how to make a judgment about something—just start with our prior, learn what we can (including from the information embedded in others’ disagreements), and update by Bayes’ Rule—now feels unsatisfying. If we trust this judgment to our own priors and our own information processing, but the very problem we’re adjudicating is that people can differ in their priors and/​or their abilities to rationally process information, why should we especially trust our own? [2][3]
Our response to some observed possible difference in priors or information-processing abilities, as opposed to differences in information, might be called “epistemically modest” to the extent that it involves giving equal weight to others’ judgments. I won’t try to define epistemic modesty more precisely here, since how exactly to formalize and act on our intuitions in its favor is, to my understanding, basically an unsolved challenge.[4] It’s not as simple as, say, just splitting the difference among people; everyone else presumably thinks they’re already doing this to the appropriate degree. But I think it’s hard to deny that at least some sort of epistemic modesty, at least sometimes, must be on the right track.

In sum: when we see people’s beliefs differ, deciding what to believe poses theoretical challenges to the extent that we can attribute the belief-difference to a lack of CP or CKR. And the challenges it poses concern how exactly to act on our intuitions for epistemic modesty.

Two mistaken responses to disagreement

This framing helps us spot what are, I think, the two main mistakes made in the face of disagreement.

The first mistake is to attribute the disagreement to an information-difference, implicitly or explicitly, and proceed accordingly.
In the abstract, it’s clear what the problem is here. Disagreements cannot just be due to information-differences.
To reiterate: when belief-differences in some domain are due entirely to differences in information, we just need to get clear on what information we have and what it implies. But a disagreement must be due, at least in part, to (possible) differences in the disagreers’ priors or information-processing abilities. Given such differences, if we’re going to trust to our own (or our friends’) prior and rationality, giving no intrinsic weight to others’ judgments, we need some compelling—perhaps impossible—story about how we’re avoiding epistemic hubris.
Though this might be easy enough to accept in the abstract, it often seems to be forgotten in practice. For example, Eliezer Yudkowsky and Mark Zuckerberg disagree on the probability that AI will cause an existential catastrophe. When this is pointed out, people sometimes respond that they can unproblematically trust Yudkowsky, because Zuckerberg hasn’t engaged nearly as much with the arguments for AI risk. But nothing about the agreement theorem requires that either party learn everything the other knows. Indeed, this is what makes it an interesting result! Under CP and CKR, Zuckerberg would have given higher credence to AI risk purely on observing Yudkowsky’s higher credence, and/​or Yudkowsky would have given lower credence to AI risk purely on observing Zuckerberg’s lower credence, until they agreed.[5] The testimony of someone who has been thinking about the problem for decades, like Yudkowsky, is evidence for AI risk—but the fact that Zuckerberg still disbelieves, despite Yudkowsky’s testimony, is evidence against; and the greater we consider Yudkowsky’s expertise, the stronger both pieces of evidence are. Simply assuming that the more knowledgeable party is closer to right, and discarding the evidence given by the other party’s skepticism, is an easy path to an echo chamber.
This is perhaps easier to see when we consider a case where we give little credence to the better-informed side. Sikh scholars (say) presumably tend to be most familiar with the arguments for and against Sikhism, but they shouldn’t dismiss the rest of the world for failing to engage with the arguments. Instead they should learn something from the fact that most people considered Sikhism so implausible as not to engage at all. I make this point more long-windedly here.
Likewise, but more subtly, people sometimes argue that we can inspect how other individuals and communities tend to develop their beliefs, and that when we do, we find that practices in the EA community are exceptionally conducive to curating and aggregating information.
It’s true that some tools, like calibration training, prediction markets, and meta-analysis, do seem to be used more widely within the EA community than elsewhere. But again, this is not enough to explain disagreement. Unless we also explicitly posit some possible irrationality or prior-difference, we’re left wondering why the non-EAs don’t look around and defer to the people using, say, prediction markets. And it’s certainly too quick to infer irrationality from the fact that a given group isn’t using some epistemic tool. Another explanation would be that the tool has costs, and that on at least some sorts of questions, those costs are put to better use in other ways. Indeed, the corporate track record suggests that prediction markets can be boondoggles.
Many communities argue for deference to their own internal consensuses on the basis of their use of different tools. Consider academia’s “only we use peer review”, for instance, or conservatives’ “only we use the wisdom baked into tradition and common sense”. Determining whose SOPs are actually more reliable seems hard, and anyway the reliability presumably depends on the type of question. In short, given disagreement, attempts to attribute belief-differences entirely to one party’s superior knowledge or methodology must ultimately, to some extent, be disguised cases of something along the lines of “They think they’re the rational ones, and so do we, but dammit, we’re right.”

The second mistake is to attribute the disagreement to a (possible) difference in priors or rationality and proceed accordingly.
Again, in the abstract, it’s clear what the problem is here. To the extent that a belief-difference is due to a possible difference in priors or rationality—i.e. a lack of CP or CKR—no one knows how to “proceed accordingly”. We want to avoid proceeding in a way that feels epistemically immodest, but, at least as of this writing, it’s unclear how to operationalize this.[6]
But again, this seems to be a hard lesson to internalize. The most common approach to responding to non-information-driven disagreement in a way that feels epistemically modest—the “template” outlined in the introduction, which I’ve used plenty—is really, on reflection, no solution at all. It’s an attempt to act as if there were no problem.[7] Look at the wording again: “assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole”, and then “act if you believe what others believe”. Given disagreement, what does “on the whole” mean? What on earth do “others believe”? Who even are the “others”? Do infants count, or do they have to have reached some age of maturity? Deferring to “others on the whole” is just another call for some way of aggregating judgments: something the disagreers all already feel they’ve done. The language sounds modest because it implicitly suggests that there’s some sort of monolithic, non-EA supermajority opinion on most issues, and that we can just round this off to “consensus” and unproblematically defer to it. But there isn’t; and even if there were, we couldn’t. Disagreement between a minority and a majority is still disagreement, and deferring to the majority is still taking a stand.
Take even the case for deferring to the probabilities of important events implied by market prices. The probability of an event suggested by the market price of some relevant asset—if such a probability can be pinned down at all—should be expected to incorporate, in some way, all the traders’ information. This mass of implicit information is valuable to anyone, but there’s no clear reason why anyone should also be particularly fond of a wealth-weighted average of the traders’ priors, not to mention information-processing quirks. When people have different priors, they should indeed be expected to bet with each other, including via financial markets (Morris, 1994). If two people trade some asset on account of a difference in their priors about its future value, why should an onlooker adopt the intermediate beliefs that happen to be suggested by the terms of the trade? If one of them is struck by lightning before the trade can be executed, do you really always want to take the other side of the trade, regardless of who was struck?
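As a toy illustration of that last point, here is a sketch with numbers I've assumed: two risk-neutral traders with different priors trading an Arrow security that pays 1 if the event occurs.

```python
p_seller, p_buyer = 0.30, 0.70   # the two traders' subjective probabilities of the event
price = 0.50                     # a clearing price both are happy with

# Expected profit per unit of the security, by each trader's own lights:
print(p_buyer - price)           # buyer:  +0.20
print(price - p_seller)          # seller: +0.20

# The trade clears at 0.50 even though neither trader believes the probability is
# 0.50 and no information has changed hands: the price blends their priors, and an
# onlooker has no particular reason to adopt it as a posterior.
```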

A minimal solution: update on information, despite not knowing how to deal with other sources of disagreement

Both mistakes start by “attributing” the disagreement to one thing: a difference in information (#1) or a possible difference in priors or rationality (#2). But a disagreement may exhibit both differences. That is—and maybe this is obvious in a way, but it took me a while to internalize!—though a disagreement cannot consist only of a difference in information, a disagreement produced by a lack of CP or CKR can be exacerbated by a difference in information. When we witness such a disagreement, we unfortunately lack a clear way to resolve the bit directly attributable to possible prior- or rationality-differences. But we can still very much learn from the information-differences, just as we can learn something in the non-common-knowledge belief-difference case of Jack, Jill, and the rain.
Sometimes, furthermore, this learning should move us to take an extreme stand on some question regardless of how we deal with the prior- or rationality-differences. That is, knowledge—even incomplete—about why disagreeing parties believe what they believe can give us unusual beliefs, even under the most absolute possible standard of epistemic modesty.
I’ll illustrate this with a simple example in which everyone has CKR but not CP. I’ll do this for two reasons. First, I find relaxations of CP much easier to think about than relaxations of CKR. Second, though the two conditions are not equivalent, many natural relaxations of CKR can be modeled as relaxations of CP: there is a formal similarity between failing to, say, sufficiently increase one’s credence in some event on some piece of evidence and simply having a prior that doesn’t put as much weight on the event given that evidence. (Brandenburger et al. (1992) and Morris (1991, ch. 4) explore the relationship between the conditions more deeply.) In any event, the goal is just to demonstrate the importance of information-differences in the absence of CP and CKR, so we can do this by relaxing either one.

The population is divided evenly among people with two types of priors regarding some event x: skeptics, for whom the prior probability of x is 1/3, and enthusiasts, for whom it's 1/2. The population exhibits CKR and common knowledge of the distribution of priors.
It's common knowledge that Person A has done some research. There is a common prior over what the outcome of the research will be: a 1/10 chance that it will justify increasing one's credence in x by 1/6 (in absolute terms), a 1/10 chance that it will justify decreasing one's credence in x by 1/6, and an 8/10 chance that it will be uninformative.
It's also common knowledge throughout the population that, after A has conditioned on her research, she assigns x a probability of 1/2. A given member of the population besides A—let's call one “B”—knows from A's posterior that her research cannot have made x seem less likely, but he doesn't know whether A's posterior is due to the fact that she is a skeptic whose research was informative or an enthusiast whose research was uninformative. B considers the second scenario eight times as likely as the first. Thinking there's a 1/9 chance that he should increase his credence by 1/6 in light of A's findings and an 8/9 chance that he should leave his credence unchanged, he increases his credence by 1/54. If he was a skeptic, his posterior is 19/54; if he was an enthusiast, his posterior is 28/54.
But an onlooker, C, who knows that A is a skeptic and did informative research will update either to 1/2, if he too is a skeptic, or to 2/3, if he started out an enthusiast. So even if C is confused about which prior to adopt, or how to mix them, he can at least be confident that he's not being overenthusiastic if he adopts a credence of 1/2. This is true even though there is public disagreement, with half the population assigning x a probability well below 1/2 (the skeptical Bs, with posteriors of 19/54) and half the population assigning x a probability only slightly above 1/2 (the enthusiastic Bs, with posteriors of 28/54). And the disagreement can persist even if C's own posterior is common knowledge (at least if it's 1/2, in this example), if others don't know the reasons for C's posterior either.[8]
Likewise, an onlooker who knows that A is an enthusiast and did uninformative research will not update at all. He might maintain a credence in x of 1/3. This will be lower even than that of other skeptics, who update slightly on A's posterior thinking that it might be better informed than it is.
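The arithmetic above can be checked directly; here is a short sketch that reproduces the example's numbers with exact fractions.

```python
from fractions import Fraction as F

skeptic_prior, enthusiast_prior = F(1, 3), F(1, 2)
shift = F(1, 6)                        # size of update an informative result justifies
p_up, p_null = F(1, 10), F(8, 10)      # chance research is positive / uninformative

# B sees only that A's posterior is 1/2. Two scenarios are consistent with that:
#   (i)  A is a skeptic (prior 1/3) whose research justified a +1/6 update;
#   (ii) A is an enthusiast (prior 1/2) whose research was uninformative.
w_i = F(1, 2) * p_up                   # P(A is a skeptic) * P(positive result)
w_ii = F(1, 2) * p_null                # P(A is an enthusiast) * P(uninformative)
p_informative = w_i / (w_i + w_ii)
print(p_informative)                               # 1/9
print(p_informative * shift)                       # 1/54: B's expected shift
print(skeptic_prior + p_informative * shift)       # 19/54: skeptical B's posterior
print(enthusiast_prior + p_informative * shift)    # 28/54: enthusiastic B's posterior

# C knows A is a skeptic whose research was informative, so C applies the full shift:
print(skeptic_prior + shift)                       # 1/2
print(enthusiast_prior + shift)                    # 2/3
```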

EA applications

So: if we have spent a long time following the EA community, we will often be unusually well-informed about the evolution of an “unusual EA belief”. At least as long as this information remains obscure, it is not necessarily epistemically immodest at all to adopt a belief that is much closer to the EA consensus than the non-EA consensus, given a belief-difference that is common knowledge.
To put this another way, we can partly salvage the idea that EA thought on some question is particularly trustworthy because others “haven’t engaged with the arguments”. Yes, just pointing out that someone hasn’t engaged with the arguments isn’t enough. The fact that she isn’t deferring to EA thought reveals that she has some reason to believe that EA thought isn’t just different from her own on account of being better informed, and sometimes, we should consider the fact that she believes this highly informative. But sometimes, we might also privately know that the belief is incorrect. We can recognize that many unusual beliefs are most often absorbed unreflectively by their believer from his surroundings, from Sikhism to utilitarianism—and, at the same time, know that EAs’ unusually low credences in existential catastrophe from climate change actually do just stem from thinking harder about x-risks.
We should be somewhat more suspicious of ourselves if we find ourselves adopting the unusual EA beliefs on most or all controversial questions. What prevents universal agreement, at least in a model like that of the section above, is the fact that the distribution of beliefs in a community like EA really may be unusual on some questions for arbitrary reasons.
Even coming to agree with EA consensus on most questions is not as inevitably suspicious as it may seem, though, because the extent to which a group has come to its unusual beliefs on various questions by acquiring more information, as opposed to having unusual priors, may be correlated across the questions. For example, most unambiguously, if one is comfortable assigning an unusually high probability to the event that AI will soon kill everyone, it’s not additionally suspicious to assign an unusually high probability to the event that AI will soon kill at least a quarter of the population, or at least half. More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group’s ability on this front is difficult to determine without years of close contact.

In sum, when you see someone or some group holding an unusual belief on a given controversial question, in disagreement with others, you should update toward them if you have reason to believe that their unusual belief can be attributed more to being better informed, and less to other reasons, than one would expect on a quick look. Likewise, you should update away from them, toward everyone else, if you have reason to believe the reverse. How you want to update in light of a disagreement might depend on other circumstances too, but we can at least say that updates obeying the pattern above are unimpeachable on epistemic modesty grounds.
How to apply this pattern to any given unusual EA belief is very much a matter of judgment. One set of attempts might look something like this:

  • AI-driven existential risk — Weigh the credences of people in the EA community slightly more heavily than others’. Yes, many of the people in EA concerned about AI risk were concerned before much research on the subject was done—i.e. their disagreement seems to have been driven to some extent by priors—and the high concentration of AI risk worry among later arrivals is due in part to selection, with people finding AI risk concerns intuitively plausible staying and those finding them crazy leaving. But from the inside, I think a slightly higher fraction of the prevalence of AI risk concern found in the EA community is due to updating on information than it would make sense for AI-risk-skeptics outside the EA community to expect.

  • AI-driven explosive growth — Weigh the credences of people in the EA community significantly more heavily than others’. The caveats above apply a bit more weakly in this case: Kurzweil-style techno-optimism has been a fair bit more weakly represented in the early EA community than AI risk concern, and there seems to have been less selection pressure toward believing in it than in believing in AI risk. The unusually high credences that many EAs assign to the event that AI will soon drive very rapid economic growth really do seem primarily to be driven by starting with typical priors and then doing a lot of research; I know at least that they are in my case. (Also, on the other side of the coin, my own “inside knowledge” of the economics community leads me to believe that their growth forecasts are significantly less informed than you would have thought from the outside.)[9]

  • Polyamory — Weigh the credences of people in the EA community less heavily than others’. From the outside, people might have thought, “those EAs seem to really think things through; if a lot of them think polyamory can work just fine in the modern age, maybe they’re ahead of the curve”. But actually, EAs don’t seem to have put any more thought than non-EAs—and arguably a fair bit less—into the question of what sort of relationship norms make for flourishing lives and communities.[10] The prevalence of polyamory can be much more straightforwardly attributed to the fact that the community selects for, say, the personality trait openness: i.e. for people with unusual priors about this kind of thing.

You may well disagree with the above attempts at applying the policy. Indeed, you probably will; it would be surprising to find that the attribution of unusual EA beliefs is difficult “from the outside”, but that the exact right way to do it is obvious to anyone who reads the EA Forum. Even if you disagree with the above attempts at an application, though, and indeed even if you think this policy still departs insufficiently from “deferring to everyone equally”, at least we have a negative result. The template of the introduction goes too far in the pursuit of epistemic modesty. We should try very hard to avoid creating echo chambers, but not to the point of modeling the ideal EA community as one pursuing atypical goals with typical beliefs. In the face of disagreement, we all have to do some thinking.

Thanks to Luis Mota for helpful comments on the post, and to David Thorstad for giving it an epistemologist’s seal of approval.

  1. ^

    One somewhat related piece I know of is Yudkowsky’s (2017) “Against Modest Epistemology”. But I would summarize its view, and how it differs from mine, as follows:
    a) Something must be wrong with epistemic modesty, because it would require you to give non-negligible credence to the existence of God, or to the event that you’re as crazy as someone in a psych ward. (But I do give non-negligible credence on both counts. In any event I certainly don’t find the conclusions absurd enough to use as a reductio.)
    b) The common-sense solution, which is correct, is to keep track of how reliable different people tend to be, including yourself, and give people more cred when they have better track records. (This seems reasonable enough in practice, but how does it work in theory? What I’m looking for is a more fleshed-out story of how to reconcile a procedure like this with the intuitions for modesty we may have when the disagreers also feel they’ve been keeping track, giving more reliable people more cred, and so on.)
    Overviews of the philosophy literature on the epistemology of disagreement are linked in footnote 4.

  2. ^

    If we worry about whether we’re choosing the “right prior” (and not just about whether we’re processing information properly), and if what we mean by “processing information properly” is following Bayes’ Rule, then we’re endorsing Objective Bayesianism. As noted earlier, this post is written from an Objective Bayesian perspective.

  3. ^

    To clarify: disagreers may both be rational, and have the same prior, yet lack CP or CKR.
    Jack and Jill may have CKR but be drawn from a population whose members have different priors, for instance. Then even if Jack and Jill happen to have the same prior, they won’t know that about each other. They may therefore persist in disagreement, each thinking that the other’s different beliefs may not be due to the other’s access to better information (which would warrant an update) but due to the other’s different prior.
    Such cases may seem less problematic than cases in which we know that one or both of the disagreers themselves are irrational or don’t share a prior. And in certain narrow cases, I believe they are less problematic. But often, a similar challenge remains. The two people in front of us may happen, in our judgment, to be rational and share a prior (though they don’t have common knowledge of that fact between them); but what makes this fact not common knowledge between them is that, in some sense, they might not have done. Under these circumstances, it can be reasonable to worry that the prior this pair happens to share isn’t “the right one”, or that we ourselves are not “one of the rational people”. From here on, I’ll just refer to absences of CP/​CKR as possible differences in priors or information-processing abilities, and note that they can raise theoretical issues that belief-differences attributable entirely to information-differences do not.

  4. ^

    Most of the literature I cite throughout this post is from economists, since it’s what I know best. But there is also a large, and mostly rather recent, literature on disagreement in philosophy (recent according to Frances and Matheson’s 2018 SEP article, which incidentally seems to provide a good overview). I have hardly read all 655 items tagged “Epistemology of Disagreement” on PhilPapers, so maybe it’s immodest of me to think I have anything to say; but on reading the most cited and skimming a few others, I think I can at least maintain that there’s no consensus about what to make of a belief-difference that persists in the face of common knowledge.

  5. ^

    Technically, to guarantee that announcing posteriors back and forth produces convergence in beliefs, we require one more assumption than is required for Aumann’s theorem, namely finite information partitions. See Geanakoplos and Polemarchakis (1982).

  6. ^

    At least outside of certain narrow cases, as noted tangentially in footnote 3, which I don’t believe are the relevant empirical cases.

  7. ^

    This point is essentially made at greater length by Morris (1995).

  8. ^

In this example, if C sets a posterior of something other than 1/2, everyone who knows C's credence will be able to infer that A's research was informative. Everyone will therefore update all the way to 1/2 or 2/3. But this is just an artifact of how stylized the example is. If C's prior and the informativeness of A's research follow continuous distributions supported everywhere, C can be known to have updated in any way without this revealing much about how informative A's research was.

  9. ^

That said, the fact that some people assign high credence to AI-driven explosive growth is hardly a secret; and since this seems like it would affect how one should invest, investors have strong incentives to look into the question of why believers believe what they believe. Tom Davidson's report on explosive growth (say) is somewhat long, and the fact that he wasn't a big singularitarian a few years ago is somewhat obscure, but neither is so long or so obscure as to account for a large, persistent belief-gap. And indeed, it seems that fewer people consider AI-driven explosive growth scenarios absurd than used to; the belief-gap has closed somewhat. But if we're going to attribute most of the gap to a difference in information, I think we do need more of a story about why it has persisted as long as it has.
    One possible answer would be that, actually, even if there is a decent chance that AI-driven explosive growth is coming, that shouldn’t change how most people invest (or live in general)—and in fact that this is obvious enough before looking into it that for most people, including most large investors, looking into it hasn’t been worth the cost.
Similarly, one could answer that a growth explosion seems improbable enough that looking into it—even to the point of looking into how its current believers came to believe in it—wasn't worth the cost in expectation. This raises the question of why investigating this hypothesis deeply enough to write long reports on it would be worth the cost at Open Phil when even skimming the reports on it wasn't worth the cost at, say, Goldman Sachs. But maybe it was. Maybe the hypothesis is unlikely enough ex ante that only a motivation to avoid “astronomical waste” makes it worth looking into at all.
    But if no story making sense of a large, persistent information-difference seems plausible, one should presumably be skeptical that it’s what accounts for much of the disagreement. And if it doesn’t, the procedure defended in this post does not justify giving EAs’ credences in explosive growth [much] more weight than others’.
The general principle here is that, to think you have an “edge” as a result of some information that it would be costly for others to acquire (like the evolution of your friends' beliefs), you have to believe that the value of this information is smaller—ex ante, from the others' perspective—than the costs of acquiring it.

  10. ^

But note that you don't have to think that EAs have put less thought into this question than others to conclude that EAs' credences should get less weight than others'. You only need to think that EAs have put less thought into this question than one would have reason to expect from outside the community.