My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that, under any given worldview, philanthropy in that worldview’s preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in “A flaw in a simple version of worldview diversification” not possible/useful.
I think looking at the relative value of marginal grants under each worldview is going to be a good intuition pump for worldview diversification type stuff. Then even if, every year, every worldview prefers its own marginal grants over those of the other worldviews, you can/will still have cases where the worldviews can shift money between years and each end up with more of what they want.
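As a minimal numerical sketch of that kind of trade (made-up cost-effectiveness numbers, just two worldviews and two years, none of it from the original exchange), something like the following shows how shifting money across years can leave both worldviews better off by their own lights:

```python
# Minimal sketch with made-up numbers: two worldviews, each with a $100 yearly
# budget over two years. Each worldview only counts value produced in its own
# cause area (the worldviews are incommensurable), so within any given year it
# prefers its own marginal grants -- yet a trade across years helps both.

# Value per dollar, by each worldview's own lights, of its own grants each year.
effectiveness = {
    "A": {1: 10, 2: 2},  # A's opportunities are much better in year 1
    "B": {1: 2, 2: 10},  # B's opportunities are much better in year 2
}
YEARLY_BUDGET = 100


def value(worldview, spending_by_year):
    """Total value, by `worldview`'s lights, of its spending in each year."""
    return sum(effectiveness[worldview][year] * amount
               for year, amount in spending_by_year.items())


# No trade: each worldview spends its own $100 on its own causes each year.
no_trade = {w: value(w, {1: YEARLY_BUDGET, 2: YEARLY_BUDGET}) for w in "AB"}

# Trade: B hands A its year-1 budget; A hands B its year-2 budget in return.
with_trade = {
    "A": value("A", {1: 2 * YEARLY_BUDGET, 2: 0}),
    "B": value("B", {1: 0, 2: 2 * YEARLY_BUDGET}),
}

print(no_trade)    # {'A': 1200, 'B': 1200}
print(with_trade)  # {'A': 2000, 'B': 2000} -- both better off by their own lights
```

Note that the within-year condition still holds here: by each worldview’s lights the other’s grants are worth nothing, so every year each worldview prefers its own marginal grants, and the gains come entirely from moving money across years.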
In practice I don’t think these trades happen, making my point relevant again.
I’m not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, xrisk), and each of them beats the others according to its own idiosyncratic expected value methodology. You then assign 1/3rd of your wealth to each (see the toy sketch after these questions). But then:
What happens when you have more information about the world? Say a malaria vaccine arrives, and global health interventions after that are less cost-effective.
What happens when you have more information about what you value? Say you reflect and come to think that animals matter more than before/that the animal worldview is more likely to be correct.
What happens when you find a way to compare the worldviews? What if you have trolley problems comparing humans to animals, or you realize that units of existential risk avoided correspond to humans who don’t die, or...
Then you either add the epicycles or you’re doing something really dumb.
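As a toy sketch of why those questions bite (made-up credences, cost-effectiveness numbers, and exchange rates throughout; this is not anyone’s actual proposal), any allocation rule more principled than a fixed 1/3rd split ends up depending on exactly the quantities the questions above ask about:

```python
# Toy sketch with made-up numbers: an allocation rule over three worldviews
# that depends on (a) current cost-effectiveness, (b) credence in each
# worldview, and (c) an exchange rate between their units of value.
# A fixed 1/3rd split is what you get by ignoring all three; any rule that
# uses them has to say what happens when they change.

credences = {"global health": 1 / 3, "animals": 1 / 3, "xrisk": 1 / 3}
cost_effectiveness = {"global health": 10, "animals": 8, "xrisk": 5}  # value/$ in each view's own units

# Hypothetical exchange rates into a common unit (the third question). If the
# worldviews really are incommensurable these are unknown, and the rule below
# is underdetermined.
exchange_rates = {"global health": 1.0, "animals": 1.0, "xrisk": 1.0}


def allocation(credences, cost_effectiveness, exchange_rates):
    """Split the budget in proportion to credence-weighted value per dollar."""
    scores = {w: credences[w] * cost_effectiveness[w] * exchange_rates[w]
              for w in credences}
    total = sum(scores.values())
    return {w: round(score / total, 3) for w, score in scores.items()}


print(allocation(credences, cost_effectiveness, exchange_rates))

# First question: a malaria vaccine arrives; global health gets less cost-effective.
cost_effectiveness["global health"] = 4
print(allocation(credences, cost_effectiveness, exchange_rates))

# Second question: on reflection, you give the animal worldview more credence.
credences = {"global health": 0.25, "animals": 0.5, "xrisk": 0.25}
print(allocation(credences, cost_effectiveness, exchange_rates))
```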