I’m pretty late to the party (perhaps even so late that people forgot that there was a party), but just in case someone is still reading this, I’ll leave my 2 cents on this post.
[Context: A few days ago, I released a post that distils a paper by Kenny Easwaran and others, in which they propose a rule for updating on the credences of others. In a (tiny) nutshell, this rule, “Upco”, asks you to update on someone’s credence in a proposition A by multiplying your odds on A by their odds.]
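To make the rule concrete, here is a minimal sketch of my own (the helper name `upco` is mine, not the paper’s):

```python
def upco(p, q):
    """Upco in the binary case: combine my credence p with a peer's
    credence q by multiplying our odds on A and converting back."""
    odds = (p / (1 - p)) * (q / (1 - q))
    return odds / (1 + odds)

# Two agents at 2/3 each: odds 2 * 2 = 4, so the combined credence is 0.8.
print(upco(2/3, 2/3))  # 0.8
```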
1. Using Upco suggests some version of strong epistemic modesty: whenever the product of the odds of the peers you have learned from is larger than your own odds, your credence will be dominated by theirs; and if we grant that this is virtually always the case, then strong epistemic modesty follows.
2. While I agree with some version of strong epistemic modesty, I strongly disagree with what I take to be the method for updating on the credences of peers proposed in this post: taking some kind of linear average (hereafter referred to as LA). Here’s a summary of reasons why I think Upco is a better updating rule, copied from my post:
Unfortunately, LA has some undesirable properties (see section 4 of the paper; several of them are illustrated numerically in the code sketch after this list):
Applied in the way sketched above, LA is non-commutative, meaning that LA is sensitive to the order in which you update on the credences of others; yet it seems like this order should be completely irrelevant to your subsequent beliefs.
This can be avoided by taking the “cumulative average” of the credences of the people you update on, i.e. each time you learn someone’s credence in A, you re-average over all the credences you have ever learned regarding this proposition. However, LA has now lost its initial appeal: for each proposition you have some credence in, rationality seems to require you to keep track of everyone you have updated on and the weights you assigned to them. This seems clearly intractable once the number of propositions and learned credences grows large.
Relatedly, LA is also sensitive to whether you update on multiple peers at once or sequentially.
Also, LA does not commute with Bayesian updating. There are cases where it matters whether you first update on someone’s credence (e.g. regarding the bias of a coin) using LA and then on “non-psychological” evidence (e.g. the outcome of a coin flip you observed) using Bayesian updating, or in the reverse order.
Moreover, LA does not preserve ‘judgments of independence’. That is, if two peers judge two propositions A and B to be independent, i.e. P(AB)=P(A)P(B) and Q(AB)=Q(A)Q(B), then after updating on each other’s credences, independence is not always preserved. This seems intuitively undesirable: if you think that the outcomes of (say) a coin flip and a die roll are independent and I think the same, why should updating on my credences change your mind about that?
LA does not exhibit what the authors call “synergy”. That is, suppose P(A)=p and Q(A)=q. Then the post-update credences P+(A) and Q+(A) necessarily lie in the interval [min(p,q), max(p,q)] if both agents apply LA. In other words, using LA never allows you to update beyond the credence of the most confident person you’ve updated on (or beyond your own, if you are more confident than everybody else).
At first sight, this might seem like a feature rather than a bug. However, it means that the credence of someone less confident than you can never be positive evidence regarding the issue at hand. Suppose you are 95% sure that A is true. For any learned credence below 95%, LA demands that you update downwards. Even if someone is perfectly rational, has a 94.9% credence in A, and has evidence completely independent of yours, LA tells you that their credence is disconfirming evidence.
Perhaps most importantly, since Bayesian Updating does not have these properties, LA does not generally produce the same results. Thus, insofar as we regard Bayesian updating as the normative ideal, we should expect LA to be at best an imperfect heuristic and perhaps not even that.
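To make these failure modes concrete, here is a small sketch of my own. It assumes equal weights for LA (the post’s averaging may be weighted, but the pathologies persist), and the helper names `la`, `upco`, and `bayes` are mine:

```python
def la(p, q):
    """Equal-weight linear average of two credences (a simplifying
    assumption on my part; other weights show the same problems)."""
    return (p + q) / 2

def upco(p, q):
    """Upco for a binary proposition: multiply odds, convert back."""
    o = (p / (1 - p)) * (q / (1 - q))
    return o / (1 + o)

def bayes(p, lr):
    """Bayesian update of credence p on evidence with likelihood ratio lr."""
    o = (p / (1 - p)) * lr
    return o / (1 + o)

# 1. Order sensitivity: starting at 0.5 and averaging in peers at 0.9
#    and 0.1 sequentially gives order-dependent results; Upco does not.
print(la(la(0.5, 0.9), 0.1), la(la(0.5, 0.1), 0.9))          # 0.4 vs 0.6
print(upco(upco(0.5, 0.9), 0.1), upco(upco(0.5, 0.1), 0.9))  # 0.5 vs 0.5

# 2. LA does not commute with Bayesian updating (likelihood ratio 4):
print(bayes(la(0.5, 0.9), 4))    # LA first, then Bayes: ~0.903
print(la(bayes(0.5, 4), 0.9))    # Bayes first, then LA: 0.85
print(bayes(upco(0.5, 0.9), 4))  # with Upco both orders agree: ~0.973
print(upco(bayes(0.5, 4), 0.9))  # ~0.973

# 3. Independence is not preserved: I hold P(A)=P(B)=0.5, a peer holds
#    Q(A)=Q(B)=0.8, and we both treat A and B as independent.
print(la(0.5, 0.8) * la(0.5, 0.8))  # P+(A)*P+(B) = 0.4225
print(la(0.25, 0.64))               # P+(AB)      = 0.445 (mismatch)

# 4. No synergy: under LA a peer at 94.9% drags a 95% credence down;
#    under Upco the same peer pushes it up.
print(la(0.95, 0.949))    # 0.9495
print(upco(0.95, 0.949))  # ~0.997
```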
In sum, LA has a whole host of undesirable properties. We would therefore want an alternative rule that avoids these pitfalls while retaining LA’s simplicity.
The EaGlHiVe aims to show that such a rule exists. They call this rule “Upco”, short for “updating on the credences of others”. Upco is a simple rule that avoids many of LA’s problems: it preserves independence, is commutative, exhibits synergy, and so on. Moreover, under some conditions Upco produces the same results as Bayesian updating.
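To illustrate the independence claim, here is a hedged sketch of the general form of Upco as I read the paper: multiply the two agents’ credences cell-wise over a partition and renormalize (the binary odds rule above is the two-cell special case). The name `upco_partition` is my own:

```python
def upco_partition(ps, qs):
    """General Upco over a partition, as I read the paper: multiply the
    agents' credences cell-wise and renormalize. The binary odds rule
    is the special case of the two-cell partition {A, not-A}."""
    prods = [p * q for p, q in zip(ps, qs)]
    total = sum(prods)
    return [x / total for x in prods]

# Cells: [A&B, A&~B, ~A&B, ~A&~B]. Both agents treat A and B as independent:
mine = [0.25, 0.25, 0.25, 0.25]    # P(A) = P(B) = 0.5
peer = [0.64, 0.16, 0.16, 0.04]    # Q(A) = Q(B) = 0.8
post = upco_partition(mine, peer)  # -> [0.64, 0.16, 0.16, 0.04]

print(post[0] + post[1])           # P+(A)   = 0.8
print(post[0] + post[2])           # P+(B)   = 0.8
print(post[0], 0.8 * 0.8)          # P+(A&B) = 0.64 = P+(A) * P+(B)
```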