Cool idea! Some thoughts I have:
A different thing you could do, instead of trading models, is compromise by assuming that there’s a 50% chance that your model is right and a 50% chance that your peer’s model is right. Then you can do utility calculations under this uncertainty. Note that this would have the same effect as the one you desire in your motivating example: Alice would scrub surfaces and Bob would wear a mask.
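Here's a minimal sketch of what that calculation looks like, with made-up numbers chosen purely for illustration (the utilities, costs, and the claim about which model says what are all assumptions, not anything from your post):

```python
# Sketch (all numbers made up): expected utility of each precaution under a
# 50/50 mixture of Alice's and Bob's models.
# Assume Alice's model says COVID spreads mostly via surfaces, Bob's says
# mostly via air, so each precaution is valuable under one model but not the other.
utility = {
    "wear_mask":      {"alice_model": 1.0, "bob_model": 9.0},
    "scrub_surfaces": {"alice_model": 9.0, "bob_model": 1.0},
}
cost = {"wear_mask": 3.0, "scrub_surfaces": 3.0}  # hassle of each precaution

weights = {"alice_model": 0.5, "bob_model": 0.5}  # 50/50 model uncertainty

for action in utility:
    eu = sum(weights[m] * utility[action][m] for m in weights)
    verdict = "take it" if eu > cost[action] else "skip it"
    print(f"{action}: expected benefit {eu:.1f} vs. cost {cost[action]:.1f} -> {verdict}")

# Under either model alone, only one precaution clears its cost (9 > 3 but 1 < 3);
# under the 50/50 mixture both clear it (5 > 3), so Alice also masks and Bob also scrubs.
```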
This would, however, make utility calculations twice as difficult as just using your own model, since you'd need to compute the expected utility under each model. But note that this amount of computation is already assumed by the premise that it makes sense for Alice and Bob to trade models: in order to reach that conclusion, each needs to compute their utility under each action in each of the two models.
I would say that this is more epistemically sound than switching models with your peer, since it's reasonably well-motivated by the notion that you are epistemic peers: you could have ended up in a world where you had the information your peer has, and vice versa.
But the fundamental issue you’re getting at here is that reaching an agreement can be hard, and we’d like to make good/informed decisions anyway. This motivates the question: how can you effectively improve your decision making without paying the cost required by trying to reach an agreement?
One answer is that you can share partial information with your peer. For instance, maybe Alice and Bob decide that they will simply tell each other their best guess about the percentage of COVID transmission that is airborne and leave it at that (without trying to resolve any remaining disagreement). In most circumstances this is enough to cause each of them to update a lot (and thus be much better informed in expectation) without requiring a huge amount of communication.
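As a toy sketch of that exchange (my own illustration, not something from your post): suppose each person treats their own estimate and their peer's as equally reliable noisy readings of the true airborne fraction, so a single update is just a precision-weighted average of the two numbers.

```python
# Toy model: Alice and Bob each report only a point estimate of the fraction
# of transmission that is airborne, update once, and stop.

def update(own_estimate: float, peer_estimate: float,
           own_var: float = 1.0, peer_var: float = 1.0) -> float:
    """Precision-weighted combination of two noisy estimates of the same quantity."""
    w_own, w_peer = 1.0 / own_var, 1.0 / peer_var
    return (w_own * own_estimate + w_peer * peer_estimate) / (w_own + w_peer)

alice_prior, bob_prior = 0.2, 0.9   # made-up initial guesses about the airborne share

alice_post = update(alice_prior, bob_prior)
bob_post = update(bob_prior, alice_prior)
print(f"Alice: {alice_prior:.2f} -> {alice_post:.2f}")
print(f"Bob:   {bob_prior:.2f} -> {bob_post:.2f}")
# With equal variances both land at 0.55: one number exchanged each way, no
# model-trading, yet both now act much closer to the pooled information.
```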
Which is better: acting as if each model has a 50% chance of being correct, or sharing limited information and then updating? I think the answer depends on (1) how well you can conceptualize your peer's model, (2) how hard updating is, and (3) whether you'll want to make similar decisions in the future without communicating. The first approach looks better when both Alice and Bob have simple-to-describe models and will want to make good COVID-related decisions in the future without consulting each other. The second looks better when Alice and Bob have difficult-to-describe models but pretty good heuristics for updating their probabilities based on each other's probabilities.
I started making a formal model of the "sharing partial information" approach and came up with an example where it makes sense for Alice and Bob to swap behaviors upon sharing partial information. But ultimately this wasn't super interesting, because the underlying behavior was just that they were updating on the partial information. So while there are some really interesting questions of the form "How can you improve your expected outcome the most while talking to the other person as little as possible?", ultimately you're getting at something different (if I understand correctly): that adopting a different model might be easier than updating your own. I'd love to see a formal approach to this (and may think some more about it later!)