I haven’t yet seen a formal approach I find satisfying and compelling for questions like “How should I behave when I perceive a significant risk that I’m badly misguided in a fundamental way?”
It seems like the obvious thing would be to front-load testing your hypotheses, try things that break quickly and perceptibly if a key belief is wrong, minimize the extent to which you try to control other agents' behavior by means other than sharing information, and share resources when you happen to be extraordinarily lucky. In other words, behave the way you'd like other agents who might be badly misguided in a fundamental way to behave.