Epistemic Trade: A quick proof sketch with one example

Note on history of this post: I wrote a draft of this post last summer (2020), when I was both more obsessed with COVID-19-related questions and interested in exploring my fit for research into philosophy/​macrostrategy, and this post has strong echoes of both. Since then I’ve decided that I have greater comparative advantage and personal fit in empirical cause prioritization, at least in the short term. I ended up deciding I should publish it anyway, and have done <15 minutes of editing between August 2020 and today.

There are two important caveats:

  • Timeliness: Because the COVID-y example originated last summer (2020), it likely already looks dated. In addition, I did not bother to look at the recent (post-August 2020) literature on peer disagreement or related issues before publishing.

  • Quality: The post is of lower quality and looks more rushed than I’d ideally like; however, I’ve decided to publish sooner rather than later, since I haven’t worked on it for almost a year and realistically I’m unlikely to work on it again any time soon.

Introduction

In the middle of a conversation with a friend, I came up with a minor idea at the intersection of a number of philosophical questions that many EAs and rationalists are interested in, including epistemic deference, peer disagreement, moral trade, acausal trade, and epistemic game theory.

I’m not claiming that this is particularly important or insightful, but I and a few people I talked to thought it was interesting. So I decided to write it up in case you also find puzzling over it and related issues interesting/​fun!

The article is written as a proof sketch rather than a proof.

Claim

Even when aggregating beliefs is costly, it can be Pareto-efficient for two peers to simply act as if they had “swapped” beliefs.

A COVID-y Example

To clarify this, I’ll give a COVID-y example. Alice and Bob are acquaintances who live far apart and are unlikely to infect each other. Assume that they are selfish (they only care about personal risk), have similar objectives around COVID-19 (they want to avoid any possibility of death or long-term disability), and have similar overall risk assessments (their overall beliefs about how risky COVID-19 is are fairly similar). However, they have very different internal risk models.

Alice thinks that aerosol transmission is the most dangerous/​likely source of COVID-19 transmission, and that the best intervention to prevent this is wearing N95 masks. She’s very skeptical of surface transmission (and correspondingly, hand hygiene).

Bob thinks that surface transmission is the most dangerous/​likely source of COVID-19 transmission, and that the best intervention to prevent this is proper adherence to hand hygiene. He’s very skeptical of aerosol transmission (and correspondingly, masks).

Alice and Bob are epistemic peers. They both respect each other a lot and don’t think one is necessarily more knowledgeable than the other. However, when they tried to respectfully discuss their disagreements, neither found the other’s arguments convincing.

However, there’s an additional twist: Alice finds wearing masks very costly. She can only manage to wear a properly fitted mask in ~70% of the situations where she considers not wearing one to be dangerous. By contrast, she considers handwashing very easy to practice (if pointless).

Bob’s costs are exactly the opposite of Alice’s.

What should Alice and Bob do? In this case, I claim that even if belief aggregation is impossible, they will be better off swapping risk models and acting as if the other’s risk model were true. I call this swapping “epistemic trade.”[1]
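To make the intuition concrete, here is a minimal toy calculation. The numbers and the linear risk model are illustrative assumptions on my part, not something the argument commits to: treat the two risk models as equally likely to be right (roughly what taking each other seriously as epistemic peers suggests), and assume each transmission route contributes a baseline risk of 1 that falls linearly with compliance with the intervention targeting that route.

```python
def expected_risk(p_aerosol, mask_compliance, handwash_compliance):
    # Each route carries risk 1 if unmitigated; compliance with the intervention
    # targeting that route reduces its risk linearly. We average over the two
    # models, treating them as equally likely (the epistemic-peer assumption).
    risk_if_aerosol_model_true = 1 - mask_compliance
    risk_if_surface_model_true = 1 - handwash_compliance
    return (p_aerosol * risk_if_aerosol_model_true
            + (1 - p_aerosol) * risk_if_surface_model_true)

# Alice: masks are costly for her (~70% compliance at best), handwashing is easy (~100%).
alice_acts_on_own_model  = expected_risk(0.5, mask_compliance=0.7, handwash_compliance=0.0)
alice_acts_on_bobs_model = expected_risk(0.5, mask_compliance=0.0, handwash_compliance=1.0)

# Bob is the mirror image: handwashing is costly (~70%), masks are easy (~100%).
bob_acts_on_own_model    = expected_risk(0.5, mask_compliance=0.0, handwash_compliance=0.7)
bob_acts_on_alices_model = expected_risk(0.5, mask_compliance=1.0, handwash_compliance=0.0)

print(alice_acts_on_own_model, alice_acts_on_bobs_model)  # 0.65 vs 0.5
print(bob_acts_on_own_model, bob_acts_on_alices_model)    # 0.65 vs 0.5
```

Under these assumed numbers, both Alice and Bob face lower expected risk after the swap (0.5 vs. 0.65), which is the Pareto improvement the claim points at. (Counting the direct unpleasantness of the costly intervention each of them drops would make the swap look even better.)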

Expanded Claim

Assume two epistemic peers with a similar objective, very different beliefs/​causal models for how to achieve said common objective, and different costs for various actions. I claim that in some situations where belief aggregation/​updating is costly, it may in expectation be Pareto-efficient for them to simply “swap” causal models.
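Stated slightly more formally (a sketch under simplifying assumptions; the notation is mine and appears nowhere else in the post): write $M_A$ and $M_B$ for the two causal models, $c_i$ for agent $i$’s cost function over actions, and $a_i(M)$ for the action agent $i$ would take if it acted as if model $M$ were true. If peer status licenses evaluating outcomes under an even mixture over the two models, the claim is that there exist cases where, for each agent $i$ (with $j$ the other agent),

$$\tfrac{1}{2}\Big[\mathrm{risk}\big(a_i(M_j)\mid M_A\big)+\mathrm{risk}\big(a_i(M_j)\mid M_B\big)\Big]+c_i\big(a_i(M_j)\big)\;<\;\tfrac{1}{2}\Big[\mathrm{risk}\big(a_i(M_i)\mid M_A\big)+\mathrm{risk}\big(a_i(M_i)\mid M_B\big)\Big]+c_i\big(a_i(M_i)\big),$$

i.e., each agent’s expected risk plus action cost is lower when acting on the other’s model than on their own, so the swap is a Pareto improvement over the no-aggregation baseline.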

Assumptions

Let’s explore each of the assumptions in detail.

  1. Two Epistemic Peers

    1. For the relevant domain, Alice and Bob must be epistemic peers.

      1. Intuitively, if Alice has her beliefs because she is a computational fluid dynamics expert who’s well-read on the COVID-19 literature and has run many of her own simulations, while Bob’s source was “has read the US CDC website once in March”, Alice would (justifiably) choose to not update much on Bob’s beliefs.

      2. It’s possible that not only must they believe each other to be epistemic peers, but they must have common knowledge of this. However, I did not explore this angle further.

    2. What defines an epistemic peer?

      1. I think this is the weakest part of the argument, since there may not be a rigorous formal definition. I tried skimming the peer disagreement literature, and got pretty confused.

  2. Similar/​common objective

    1. In this situation, we’re imagining that with reference to model-relevant details, Alice and Bob have a similar (presumably selfish) objective, like not getting COVID-19.

      1. The picture is a lot less intuitive if Alice wants to avoid getting COVID-19 and Bob wants to maximize the number of paperclips in the universe.

    2. We’re also implicitly assuming that the objective has a similar magnitude for both of them.

      1. In this case, a similar risk appetite. The model might break if Alice is 10000 times more worried about COVID-19 than Bob.

  3. Different causal models on how to achieve said objective

    1. In our case, different risk models for which things cause COVID-19 transmission and which interventions prevent it.

  4. Different cost functions

    1. This is what makes a trade possible. In our case, handwashing is less costly for Alice, and mask usage is less costly for Bob.

  5. Belief aggregation is difficult or impossible

  6. Model exchange is possible, and not too costly

    1. In essence, the exchange has to a) be cheaper/​more doable than belief aggregation, and b) not lose too much model fidelity.

Robustness

Are these assumptions potentially realistic in real-world situations?

  1. Two Epistemic Peers

    1. I don’t know whether “true epistemic peer” is a well-defined term here, but intuitively, situations where two people are sufficiently close to being epistemic peers have to be fairly common (for example, two epidemiologists with very different risk models but a similar impact factor, two EAs who respect each other a lot, or two forecasters with similar Brier scores on similar questions).

  2. Similar/​common objective

    1. For the COVID-19 example, this seems like a fairly safe assumption. People may not have identical total risk tolerance, but the variance here is often lower than the differences described in the next two points.

  3. Different causal models on how to achieve said objective

    1. I regularly encounter people in similar reference classes (different epidemiologists on Twitter, say) who have very different risk models for, e.g., whether SARS-CoV-2 spreads via large droplets vs. small airborne droplets vs. fomites.

  4. Different cost functions

    1. Intuitively, I regularly meet people who seem to have costs that are 10x greater or smaller than mine for the same intervention.

      1. Taking preferences at face value, it’s hard to imagine that people would go to rallies to oppose mask usage unless it really mattered to them.

  5. Belief aggregation is difficult or impossible

    1. I’m not sure how hard this is in practice. I do feel like there are many times when I talk to people whom I mostly consider epistemic peers, and after long conversations we cannot reach consensus (and indeed, if we ignore social politeness, I at least would not have updated toward their position at all).

  6. Exchange of action plans based on different models is possible, and not too costly

    1. Possible: I do think there are some situations where it’s hard to update your models, but you can act as if you believe the new model.

      1. I think in practice this is healthier than changing your beliefs based on outside-view reasons.

    2. Not too costly: One cost is the time cost of communicating your model, and/​or what actions your model entails, during an exchange. Another cost is fidelity: you’re probably worse at operating under a model you don’t believe than one that you do.

      1. For example, if you haven’t thought a lot about the implications of airborne transmission, you may be worse at specifically identifying/​remembering the most necessary situations for mask usage.

      2. I suspect that in practice this is not a big deal relative to #3 and #4 above.

      3. A point from Caleb: in practice, an additional cost is becoming the kind of person who does things that violate their own beliefs, sacrificing consistency for meta-consistency. Some people can do this; others can’t.

Some side notes

Is aggregating beliefs/​updating always better than trade?

No, not strictly speaking. Toy counterexamples will be left as an exercise for the reader.

(An example of such a situation is one where risk drops discontinuously at X% compliance with an intervention, such that the typical intermediate compliance implied by averaging the two world models is not enough to reach X%.)
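As a hedged illustration of the shape of such a counterexample (the threshold and numbers are assumptions for illustration only): suppose an intervention only pays off once compliance crosses 90%. A naive “middle of the road” policy derived from averaging the two models splits effort and crosses neither threshold, while fully adopting a single model crosses at least one. (A sufficiently careful aggregator would notice the threshold and act differently; the point is only that naive intermediate actions can lose to a full swap.)

```python
THRESHOLD = 0.9  # assumed: the intervention only works above this compliance level

def channel_risk(compliance):
    # Risk from a transmission route is eliminated only above the compliance threshold.
    return 0.0 if compliance >= THRESHOLD else 1.0

def expected_risk(p_aerosol, mask_compliance, handwash_compliance):
    # Average over the two models, treated as equally likely.
    return (p_aerosol * channel_risk(mask_compliance)
            + (1 - p_aerosol) * channel_risk(handwash_compliance))

# Naive "averaged model" policy: split effort evenly across both interventions.
print(expected_risk(0.5, 0.5, 0.5))  # 1.0 -- neither threshold is reached

# Fully acting on one model: all effort goes into a single intervention.
print(expected_risk(0.5, 1.0, 0.0))  # 0.5 -- at least one model's threshold is reached
```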

Is trade necessary?

One might ask: Is trade necessary? If an agent thinks that an epistemic peer’s risk model is less costly (and they place no terminal value in the well-being of the peer), can’t they just unilaterally update to the peer’s risk model?

My guess is that the answer is yes; you need to trade in at least some situations. My intuition goes something like this: consider a case with N risk models and N sets of costs. If everybody thinks it’s epistemically acceptable to unilaterally update, the “correct” thing to do would be a race to the bottom where each person adopts whichever risk model is “easiest” to follow. Intuitively (I don’t have a proof), this will lead to greater overall risk in expectation. Thus, having a peer willing to trade serves as a credible signal that your update doesn’t increase overall expected risk.


How to formalize this is unclear to me.

Is this idea…

True?

Having not put too much thought into it, I place ~60% credence that the core idea, or something meaningfully like it, is true.

Novel?

After talking to several people who know more than I do, doing some light scans of Google Scholar, and reading the Stanford Encyclopedia of Philosophy’s sections on peer disagreement and epistemic game theory, I place moderate (~60%?) credence in it being novel. It has startlingly similar characteristics to the epistemic prisoners’ dilemma, but I still think it’s meaningfully different.


This belief is not very resilient, and I’ll quickly update if somebody comments with a citation.

Useful?

I’m currently fairly optimistic (around 80%?) that this is a sufficiently interesting idea to be worth people’s time to read.

I’m much less optimistic (~17%?) that this has direct usefulness in advancing theoretical work, and even more pessimistic (~12%?) that it will have sufficiently interesting practical implications to, e.g., end up as part of a solution in another paper.

Applicability to Effective Altruism

So far, I think this is a solution looking for a problem. The main point of interest, I think, is that it might generalize some results in moral trade to situations where the fundamental disagreements are epistemic, rather than stemming from different terminal values or moral systems.

A commentator also suggested that there’s applicability to Comparative advantage in EA careers, though here I am personally unsure about (and lean against) the practical utility of epistemic trade, vs. either a) actually updating or b) trading impact certificates.

Future Work (Possibilities)

Here are things I’d be excited to see future work on:

  1. Making subparts of the argument more rigorous

    1. Figuring out what “epistemic peer” really refers to

    2. Cleaning up the “is trade necessary” section

  2. Figuring out which assumptions have to be true for epistemic trade to be Pareto-efficient:

    1. Do you need monotonicity of risk models/​costs?

    2. What other structure of costs and utilities is/​is not necessary?

  3. Coming up with non-COVID-19 examples

  4. Adding references and tying this work in with the existing academic literature

  5. A deeper dive to probe whether it’s true/​novel

  6. Thinking harder about potential practical applications of epistemic trade?

    1. AI alignment?

    2. Game theory/​collective decision-making?

    3. How EAs allocate resources?

  7. Coming up with a more precise name than epistemic trade[1]

Future Work (Realistic)

If this post gets a bunch of good and/​or useful feedback without a devastating counterargument, I might (May 2021: ~25%?; was ~55% in the first draft) expand it into a longer blog post.

On my own, I think it is unlikely (May 2021: ~6%; original: ~13%) that I’ll want to make it substantially more rigorous, e.g., by trying to make the arguments rigorous enough for a paper or preprint. However, I will of course be very excited if someone with different incentives and interests from mine (e.g., a PhD student in epistemology or game theory, or someone from a different domain who can think of practical applications for this idea) wants to collaborate.

Footnotes and Caveats

[1] I checked to make sure the phrase “epistemic trade” isn’t already taken. However, I think this isn’t a very important concept, and reserving the phrase “epistemic trade” seems a bit defect-y. (I also feel this way about the not-so-fundamental Fundamental Attribution Error, as well as most theories that begin with the word “modern.”) If people have suggestions for a more precise/​descriptive name with a lower probability of future naming collisions, let me know and I’d gladly rename this article.

Thanks go to my past housemates (especially Adam and Pedro) for indulging my COVID-19 obsession and the various impractical proposals/​flights of fancy that come with thinking about it from all angles, to Tushant Jha for being willing to listen to my initial rambly, ill-formed thoughts on the issue and providing a vocabulary and structure to make them more rigorous, to Greg Lewis for giving it a fair shake and encouraging me to write up this argument, and to Carl Shulman for pointing me to prior work on LessWrong on Epistemic Prisoner’s Dilemmas. As usual, all errors are my own.