> Here’s one huge problem with the Upco method as you present it: two people think that there’s a 1/6 chance of rolling a six on a (fair) die. This opinion shouldn’t change when you update on others’ opinions. If you used Upco, that’s a 1:5 ratio, giving final odds of 1:25, clearly incorrect. On the other hand, a geometric mean approach gives sqrt((1*1)/(5*5)) = 1:5, as it should.
Hey Daniel,
thanks for engaging with this! :)
You might be right that the geometric mean of odds performs better than Upco as an updating rule, although I’m still unsure exactly how you would implement it. Suppose you used the geometric mean of odds to update on a first person’s credence and then learn the credence of a second person: would you then change the weight (in the exponent) you gave the first peer to 1/2 and update as though you had just learnt both people’s credences at once? That seems pretty cumbersome, as you’d have to keep track of the number of credences you have already updated on for each proposition in order to assign the correct weight to a new credence. So even if the geometric mean of odds (GMO) were a better approximation of the ideal Bayesian response than Upco (which to me is an open question), Upco seems practically feasible in a way that GMO is not.
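To make the bookkeeping contrast concrete, here is a rough sketch (my own, not code from the post) of the two pooling rules, with each credence represented as plain odds in favour of the proposition:

```python
from math import prod

def upco(odds):
    """Upco pools credences by multiplying everyone's odds together.
    Stateless: to incorporate a new peer, just multiply their odds in;
    no record of how many credences were already pooled is needed."""
    return prod(odds)

def gmo(odds):
    """Geometric mean of odds: each of the n credences gets weight 1/n
    in the exponent. Adding a peer changes every existing weight, so
    you must track n for each proposition you have updated on."""
    n = len(odds)
    return prod(o ** (1 / n) for o in odds)

# Two peers both report odds 1:5 (credence 1/6):
upco([1/5, 1/5])  # pooled odds 1:25
gmo([1/5, 1/5])   # pooled odds stay at 1:5
```

The point is just that `upco` can absorb a new peer's odds with one multiplication, whereas `gmo` has to recompute every exponent whenever a credence is added.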
If one person reports credence 1/6 in rolling a six on a fair die, and this is part of a partition containing one proposition for each possible outcome {“Die lands on 1”, “Die lands on 2”, … “Die lands on 6”}, then the version of Upco that can deal with complex partitions like this will tell you not to update on this credence (see the section on “arbitrary partitions”). I think the problem you mention only occurs because you are using a partition that collapses all five other possible outcomes into one proposition, “the die will not land on 6”.
This case does highlight that the output of Upco depends on the partition you assume someone to report their credence over. But since GMO as an updating rule differs from Upco only in assigning a weight of 1/n to each person’s credence (where n is the number of credences already learned), I’m pretty sure you can find the same partition dependence with GMO.
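A quick sketch of how I’d formalise the partition dependence (my own reconstruction of the multi-proposition rule, not code from the post): pool each cell of the partition by multiplying the agents’ credences in it, then renormalise. This reproduces both the 1:25 result on the collapsed binary partition and the no-update result on the full six-way partition.

```python
from math import prod

def upco_partition(profiles):
    """Upco over an arbitrary partition (my reading of the rule):
    pool each cell by multiplying the agents' credences in it, then
    renormalise so the pooled credences sum to 1. `profiles` is a
    list of credence vectors, one per agent, over the same partition."""
    pooled = [prod(cell) for cell in zip(*profiles)]
    total = sum(pooled)
    return [p / total for p in pooled]

# Binary partition {"lands on 6", "does not land on 6"}, two peers at 1/6:
upco_partition([[1/6, 5/6], [1/6, 5/6]])  # [1/26, 25/26], i.e. odds 1:25

# Full six-way partition, both peers uniform at 1/6 per outcome:
upco_partition([[1/6] * 6, [1/6] * 6])    # stays uniform: no update
```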