FWIW, the “deals and fairness agreement” section of this blogpost by Karnofsky seems to agree with (or at least discuss) the idea of trade between different worldviews:
It also raises the possibility that such “agents” might make deals or agreements with each other for the sake of mutual benefit and/or fairness.
Methods for coming up with fairness agreements could end up making use of a number of other ideas that have been proposed for making allocations between different agents and/or different incommensurable goods, such as allocating according to minimax relative concession; allocating in order to maximize variance-normalized value; and allocating in a way that tries to account for (and balance out) the allocations of other philanthropists (for example, if we found two worldviews equally appealing but learned that 99% of the world’s philanthropy was effectively using one of them, this would seem to be an argument – which could have a “fairness agreement” flavor – for allocating resources disproportionately to the more “neglected” view). The “total value at stake” idea mentioned above could also be implemented as a form of fairness agreement. We feel quite unsettled in our current take on how best to practically identify deals and “fairness agreements”; we could imagine putting quite a bit more work and discussion into this question.
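To make one of those allocation rules concrete, here is a minimal sketch of what “maximize variance-normalized value” could look like, using the rescaling familiar from the moral-uncertainty literature: put each worldview’s scores on a mean-zero, variance-one scale before aggregating, so no worldview dominates just because it uses bigger numbers. This is my own illustration, not from Karnofsky’s post; the worldviews, options, and numbers are invented.

```python
import numpy as np

# Hypothetical value each worldview assigns to three allocation options
# (rows: worldviews, columns: options). Numbers are made up for illustration.
options = ["all_to_health", "all_to_animals", "split_evenly"]
values = np.array([
    [10.0,   0.0,   6.0],   # global health worldview
    [ 0.0, 500.0, 250.0],   # animal welfare worldview
    [ 1.0,   1.0, 100.0],   # x-risk worldview (say it likes keeping options open)
])

# Variance normalization: rescale each worldview's scores to mean 0 and variance 1,
# so differences in raw numerical scale across worldviews stop mattering.
normalized = (values - values.mean(axis=1, keepdims=True)) / values.std(axis=1, keepdims=True)

# Aggregate with equal credence in each worldview and pick the best option.
totals = normalized.sum(axis=0)
best = options[int(np.argmax(totals))]
print(dict(zip(options, totals.round(2))), "->", best)
```

With these made-up numbers the “split_evenly” option comes out on top, even though each worldview’s own favorite is to get everything.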
Different worldviews are discussed as being incommensurable here (in which case maximizing expected choice-worthiness doesn’t work). My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview’s preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in “A flaw in a simple version of worldview diversification” not possible/useful.
In practice I don’t think these trades happen, making my point relevant again.
My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview’s preferred cause area will always win out in utility calculations
I’m not sure exactly what you are proposing. Say you have three incommensurable views of the world (say, global health, animals, x-risk), and each of them beats the others according to its own idiosyncratic expected-value methodology. You then assign a third of your wealth to each. But then:
What happens when you have more information about the world? Say there is a malaria vaccine, and global health interventions after that are less cost-effective.
What happens when you have more information about what you value? Say you reflect and come to think that animals matter more than you previously thought, or that the animal worldview is more likely to be correct.
What happens when you find a way to compare the worldviews? What if you have trolley problems comparing humans to animals, or you realize that units of existential risk avoided correspond to humans who don’t die, or...
Then you either add the epicycles or you’re doing something really dumb.
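To make those failure modes concrete, here is a toy sketch of my own (invented numbers, nobody’s actual methodology) comparing a static one-third split to a rule that responds to new evidence and changed credences. The responsive rule cheats in that it multiplies credences by cost-effectiveness as if the worldviews were already on a common scale (your third question), but it illustrates the point: something in the allocation has to move when the inputs move, while fixed buckets stay put.

```python
def fixed_thirds(budget, worldviews):
    # Static worldview diversification: equal buckets, regardless of any new evidence.
    return {name: budget / len(worldviews) for name in worldviews}

def credence_weighted(budget, worldviews):
    # Responds to both new evidence (cost_effectiveness) and moral updates (credence);
    # assumes the worldviews have somehow been put on a common scale.
    weights = {name: w["credence"] * w["cost_effectiveness"] for name, w in worldviews.items()}
    total = sum(weights.values())
    return {name: budget * w / total for name, w in weights.items()}

worldviews = {
    "global_health":  {"credence": 1 / 3, "cost_effectiveness": 10.0},
    "animal_welfare": {"credence": 1 / 3, "cost_effectiveness": 10.0},
    "x_risk":         {"credence": 1 / 3, "cost_effectiveness": 10.0},
}
budget = 900

print(fixed_thirds(budget, worldviews))       # 300 / 300 / 300
print(credence_weighted(budget, worldviews))  # also 300 / 300 / 300

# New information arrives: a malaria vaccine makes marginal global-health grants less
# cost-effective, and reflection raises your credence in the animal worldview.
worldviews["global_health"]["cost_effectiveness"] = 4.0
worldviews["animal_welfare"]["credence"] = 0.5
worldviews["global_health"]["credence"] = 0.25
worldviews["x_risk"]["credence"] = 0.25

print(fixed_thirds(budget, worldviews))       # still 300 / 300 / 300
print(credence_weighted(budget, worldviews))  # shifts toward animals and x-risk
```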
My understanding though is that the (somewhat implicit but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview’s preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in “A flaw in a simple version of worldview diversification” not possible/useful.
I think looking at the relative value of marginal grants in each worldview is a good intuition pump for worldview-diversification-type questions. Even if, every year, every worldview prefers its own marginal grants over those of the other worldviews, there can still be cases where the worldviews shift money between years and each ends up with more of what it wants.
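As a toy illustration of that between-years trade (invented numbers; each worldview values only its own grants, in its own incomparable units):

```python
# Value-per-dollar of each worldview's own marginal grants, by year.
marginal_value = {
    "A": {"year1": 1.0, "year2": 10.0},   # A's best opportunities arrive in year 2
    "B": {"year1": 10.0, "year2": 1.0},   # B's best opportunities are in year 1
}
yearly_budget = 100  # dollars each worldview controls per year

# No trade: each worldview spends its own budget in its own area each year.
no_trade = {
    w: yearly_budget * v["year1"] + yearly_budget * v["year2"]
    for w, v in marginal_value.items()
}

# Trade: A lends its year-1 budget to B, and B repays the same amount in year 2,
# so each worldview spends its whole two-year budget in its better year.
with_trade = {
    "A": 2 * yearly_budget * marginal_value["A"]["year2"],
    "B": 2 * yearly_budget * marginal_value["B"]["year1"],
}

print(no_trade)    # {'A': 1100.0, 'B': 1100.0}  (each in its own units)
print(with_trade)  # {'A': 2000.0, 'B': 2000.0}  -- both strictly better by their own lights
```

No cross-worldview comparison is needed for this deal: each worldview only has to check that it ends up with more by its own lights.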