FWIW, the “deals and fairness agreement” section of this blogpost by Karnofsky seems to agree with (or at least discuss) the idea of trade between different worldviews:
It also raises the possibility that such “agents” might make deals or agreements with each other for the sake of mutual benefit and/or fairness.
Methods for coming up with fairness agreements could end up making use of a number of other ideas that have been proposed for making allocations between different agents and/or different incommensurable goods, such as allocating according to minimax relative concession; allocating in order to maximize variance-normalized value; and allocating in a way that tries to account for (and balance out) the allocations of other philanthropists (for example, if we found two worldviews equally appealing but learned that 99% of the world’s philanthropy was effectively using one of them, this would seem to be an argument – which could have a “fairness agreement” flavor – for allocating resources disproportionately to the more “neglected” view). The “total value at stake” idea mentioned above could also be implemented as a form of fairness agreement. We feel quite unsettled in our current take on how best to practically identify deals and “fairness agreements”; we could imagine putting quite a bit more work and discussion into this question.
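To make one of the allocation rules Karnofsky mentions a bit more concrete, here is a minimal sketch of “variance-normalized” allocation. This is my own toy illustration, not anything from the Open Philanthropy post; the worldview names, causes, scores, and equal-credence weighting are all made up for the example.

```python
import numpy as np

# Toy illustration (made-up numbers): two worldviews score three causes
# on their own, mutually incommensurable utility scales.
causes = ["bednets", "x-risk", "animal welfare"]
scores = {
    "neartermist": np.array([10.0, 2.0, 5.0]),
    "longtermist": np.array([1.0, 9000.0, 2.0]),
}

# Variance-normalize: rescale each worldview's scores to unit standard
# deviation, putting the otherwise incommensurable scales on a common
# footing before aggregating them.
normalized = {w: s / s.std() for w, s in scores.items()}

# Aggregate with equal credence in each worldview and rank the causes.
total = sum(normalized.values())
ranking = sorted(zip(causes, total.round(2)), key=lambda pair: -pair[1])
print(ranking)
```

Allocating budget proportionally to the normalized totals, rather than winner-take-all, would be another option; the point is just that some explicit normalization step is needed before the worldviews’ scales can be compared at all.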
Different worldviews are discussed as being incommensurable here (in which case maximizing expected choice-worthiness doesn’t work). My understanding, though, is that the (somewhat implicit, but more reasonable) assumption being made is that under any given worldview, philanthropy in that worldview’s preferred cause area will always win out in utility calculations, which makes the sort of deals proposed in “A flaw in a simple version of worldview diversification” impossible or not useful.
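As a toy illustration of why that assumption blocks such deals (again my own sketch, with made-up numbers and linear “returns”; neither post frames it this way): if each worldview values its own cause area more at every margin, the default split in which each bucket funds its own cause is already Pareto efficient, so no reallocation makes both worldviews better off.

```python
import numpy as np

# Two worldviews each control a $1M bucket. Worldview A thinks cause X
# dominates at every margin; worldview B thinks the same of cause Y.
def value_A(x, y):  # A's utility from $x (millions) to cause X, $y to cause Y
    return 10.0 * x + 0.1 * y

def value_B(x, y):  # B's utility: the mirror image
    return 0.1 * x + 10.0 * y

# Default: each bucket fully funds its own favorite cause.
default = (1.0, 1.0)
base_A, base_B = value_A(*default), value_B(*default)

# Search over alternative splits of the combined $2M. Because preferences
# over the split are strictly opposed at every margin, no alternative
# makes both worldviews better off; there is no deal to make.
improvements = [
    (x, 2.0 - x)
    for x in np.linspace(0.0, 2.0, 201)
    if value_A(x, 2.0 - x) > base_A and value_B(x, 2.0 - x) > base_B
]
print(improvements)  # [] : no mutually beneficial reallocation exists
```

Deals of the kind Nuño describes rely on a worldview sometimes valuing a marginal dollar in another bucket (or compensation from it) more than a marginal dollar in its own, which is exactly what this assumption rules out.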
Some related thoughts and questions:
NunoSempere points out that EA could have been structured in a radically different way if the “specific cultural milieu” had been different. But I think this can be taken even further: it’s plausible that if a few moments in the history of effective altruism had gone differently, the social makeup (the sort of people who make up the movement) and their axiological worldviews (the sorts of things they value) might have been radically different too.
As someone interested in the history of ideas, I’m fascinated by what made our movement significantly different from the most likely counterfactual movements. Why is effective altruism the way it is? A number of interesting brief histories of EA have been written (along with longer pieces on more specific topics, like Moynihan’s excellent X-Risk), but I often feel that there are a lot of open questions about the movement’s history, especially regarding tensions between the different worldviews that make up EA.
For example,
How much was it the individual “leaders” of EA who brought together different groups of people to create a big-tent EA, as opposed to the communities themselves already being connected? (Toby Ord says that he connected the Oxford GWWC/EA community to the rationality community, but people from both of these “camps” seem to have been on Felicifia together in the late 2000s.)
When tracing the history of thought, there’s a tendency to place thinkers one after another in lineages, as if each read and was responding to those who came before them. Parfit lays the ground for longtermism in the late 20th century in Reasons and Persons, and Bostrom continues the work when presenting the idea of x-risk in 2001. Did Bostrom know of and expand upon Parfit’s work, or was Bostrom’s framing independent of it, based on risks discussed by the Extropians, Yudkowsky, SL4, etc.? There seems (maybe) to have been multiple discovery of early EA ideas in the separate creation of the Oxford/GWWC community and GiveWell. Is something like that going on for longtermism/x-risk?
What would EA look like today without Yudkowsky? Bostrom? Karnofsky/Hassenfeld? MacAskill/Ord?
What would EA look like today without Dustin Moskovitz? Or if we had another major donor? (One with different priorities?)
What drove the “longtermist turn”? Was it a shift driven by leaders or by the community?
A few interesting Yudkowsky quotes, not to be taken as his current opinions and included for historical purposes (see also Extropian Archaeology):
It’s fascinating to me that this is the reason there’s a “rationality” community around today. (See also) What would EA look like without it? Would it really be any less rational? What does a transhumanist-y, non-AI-worried EA look like? I feel like that’s what we might have had without Yudkowsky.
One last thing: