Nice, I liked the examples you gave (e.g. the meat-eater problem), and I think the post would be stronger if each type had a practical example. Another example I thought of is that a climate change worldview might make a bet about the amount of fossil fuels used in some future year, not because the two sides disagree empirically, but because money would be more valuable to the climate worldview in slower-decarbonising worlds (this would be ‘insurance’ in your taxonomy, I think).
Compromises and trades seem structurally the same to me. The key feature is that the two worldviews have contrasting but not inverse preferences, where there is some ‘middle’ choice that is more than halfway between the worst choice and the best choice from the POV of both worldviews. It doesn’t seem to matter greatly whether the worst choice is neutral or negative according to each worldview. Mathematically, if one worldview’s utility function across options is U and the other worldview’s is V, then we are talking about cases where U(A) > U(B) > U(C), V(A) < V(B) < V(C), and U(B) + V(B) > max(U(A) + V(A), U(C) + V(C)).
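To make that condition concrete, here’s a minimal sketch with made-up utilities (the numbers, and the option labels A, B, C, are placeholders I’ve invented for illustration, not anything from the post):

```python
# Toy utilities for three options; the two worldviews rank them in opposite orders.
U = {"A": 10, "B": 8, "C": 0}   # worldview 1: A > B > C
V = {"A": 0,  "B": 8, "C": 10}  # worldview 2: C > B > A

def is_bargain(option, U, V):
    # The middling option is a bargain if its summed utility beats every other option's.
    others = (o for o in U if o != option)
    return U[option] + V[option] > max(U[o] + V[o] for o in others)

print(is_bargain("B", U, V))  # True: 8 + 8 = 16 > max(10 + 0, 0 + 10) = 10
```

So even though the rankings are exactly opposed, the cardinal values make B the joint best choice, which is the kind of case where a bargain is available.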
I agree with you Oscar, and we’ve highlighted this in the summary table, where I borrowed your ‘contrasting project preferences’ terminology. Still, I think it could be worth drawing the conceptual distinctions, because doing so might help identify places where bargains can occur.
I liked your example too! We tried to add a few (a GCR-focused agent believes AI advances are imminent while a GHD agent is skeptical; an AI safety view borrows resources from a Global Health view to fund urgent AI research; the meat-eater problem; an agent supporting gun rights and another supporting gun control both fund a neutral charity like Oxfam...) but we could have done better in highlighting them. I’ve also added these to the table.
I found your last mathematical note a bit confusing because I originally read A, B, C as projects they might each support. But if it’s outcomes (i.e. pairs of projects they would each support), then I think I’m with you!
Hmm, yes, actually I think my notation wasn’t very helpful. Maybe the simpler framing is that if the agents have opposite preference rankings, but cardinal ratings such that the middling option is more than halfway between the best and worst options, then a bargain is in order.
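(If I’m formalising that right, the ‘more than halfway’ condition is just U(B) > (U(A) + U(C))/2 and V(B) > (V(A) + V(C))/2. With the toy numbers in the sketch above, that’s 8 > (10 + 0)/2 = 5 on both sides, which, with those numbers at least, also satisfies the summed-utility condition from earlier.)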
Nice!