Nice, I liked the examples you gave (e.g. the meat-eater problem) and I think the post would be stronger if each type had a practical example. Another example I thought of: a climate change worldview might make a bet about the amount of fossil fuels used in some future year, not because its empirical views are different, but because money would be more valuable to it in slower-decarbonising worlds (this would be ‘insurance’ in your taxonomy, I think).
Compromises and trades seem structurally the same to me. The key feature is that the two worldviews have contrasting but not inverse preferences, so there is some ‘middle’ choice that is more than halfway between the worst and best choices from the point of view of both worldviews. It doesn’t seem to matter greatly whether the worst choice is neutral or negative according to each worldview. Mathematically, if one worldview’s utility function across options is U and the other worldview’s is V, then we are talking about cases where U(A) > U(B) > U(C), V(A) < V(B) < V(C), and U(B) + V(B) > max(U(A) + V(A), U(C) + V(C)).
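To make that condition concrete, here is a minimal sketch with made-up utility numbers (the values and the function name are purely illustrative, not from the post):

```python
# Hypothetical utilities for three options A, B, C.
# The first worldview prefers A, the second prefers C,
# and B is the candidate 'middle' compromise.
U = {"A": 10, "B": 7, "C": 0}
V = {"A": 0, "B": 7, "C": 10}

def is_compromise(U, V, best_u="A", middle="B", best_v="C"):
    """Check the three conditions: U(A) > U(B) > U(C),
    V(A) < V(B) < V(C), and
    U(B) + V(B) > max(U(A) + V(A), U(C) + V(C))."""
    ordered_u = U[best_u] > U[middle] > U[best_v]
    ordered_v = V[best_u] < V[middle] < V[best_v]
    middle_wins = U[middle] + V[middle] > max(
        U[best_u] + V[best_u],
        U[best_v] + V[best_v],
    )
    return ordered_u and ordered_v and middle_wins

print(is_compromise(U, V))  # True: 7 + 7 = 14 beats both extremes (10 each)
```

With these numbers the sum of utilities at B (14) exceeds the sum at either extreme (10), so B is the kind of middle option where both compromise and trade can arise.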