> If you’re Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right.
Minor/Meta aside: I don’t think ‘hedging’ or diversification is the best way to look at this, whether one is an individual or a mega-funder.
On standard consequentialist doctrine, one wants to weigh things up ‘from the point of view of the universe’, and be indifferent as to ‘who is doing the work’. Given this, it looks better to act in the way which best rebalances the humanity-wide portfolio of moral effort, rather than more narrowly optimising ‘the EA community’, ‘OP’s grants’, or one’s own effort.
This rephrases the ‘neglectedness’ consideration. Yet I think people don’t often think enough about conditioning on the current humanity-wide portfolio, or see their effort as part of this wider whole, and this can mislead them into moral paralysis (and, perhaps, insufficient extremising). If I have to ‘decide what worldview is actually right’, I’m screwed: many of my uncertainties I’d expect to be resilient to a lifetime of careful study. Yet I have better prospects of reasonably believing that “This issue is credibly important enough that (all things considered, pace all relevant uncertainties) in an ideal world humankind would assign X people to work on it—given in fact there are Y, Y << X, perhaps I should be amongst them.”
This is a better sketch of why I work on longtermism than overall confidence in my ‘longtermist worldview’. This doesn’t make worldview questions irrelevant (there are a lot of issues where the sketch above applies, and relative importance will be one of the ingredients that goes into the mix of divining which one to take), but it means I’m fairly sanguine about perennial uncertainty. My work is a minuscule part of the already-highly-diversified corporate effort of humankind, and the tacit coordination strategy of people like me acting on our best guess of the optimal portfolio looks robustly good (a community like EA may allow better ones), even if (as I hope and somewhat expect) my own efforts transpire to have little value.
The reason I shouldn’t ‘hedge’ but Open Phil should is not so much because they can afford to (given they play with much larger stakes, better resolution on ‘worldview questions’ has much higher value to them than to me), but because the returns to specialisation are plausibly sigmoid over the ‘me to OP’ range. For individuals, there are increasing marginal returns to specialisation: in the same way we lean against ‘donation splitting’ with money, so too with time (it seems misguided for me to spend, say, 30% on bio, 10% on AI, 20% on global health, etc.). A large funder (even though it still represents a minuscule fraction of the humanity-wide portfolio) may have overlapping marginal return curves between its top picks of (all things considered) most promising things to work on, and it is better placed to realise other ‘portfolio benefits’.
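The shape of this argument can be sketched numerically. The return functions below are purely illustrative assumptions (not anything the comment commits to): convex returns to an individual’s time at small scale, concave returns to a large funder’s money, i.e. the two ends of a sigmoid.

```python
# Toy sketch (illustrative return functions only): convex returns favour
# specialisation for an individual; concave returns favour splitting for
# a large funder.

def individual_value(t):
    # Assumed increasing marginal returns to a share of time t in [0, 1].
    return t ** 2

def funder_value(m):
    # Assumed diminishing marginal returns to a share of budget m.
    return m ** 0.5

# Individual: splitting time 50/50 across two causes vs full specialisation.
split = individual_value(0.5) + individual_value(0.5)  # 0.25 + 0.25 = 0.5
focus = individual_value(1.0)                          # 1.0
assert focus > split  # under convex returns, specialising dominates

# Funder: splitting a unit budget across two causes vs concentrating it.
split_f = funder_value(0.5) + funder_value(0.5)        # ~1.414
focus_f = funder_value(1.0)                            # 1.0
assert split_f > focus_f  # under concave returns, diversifying dominates
```

The point of the sketch is just the sign flip: whether splitting beats concentrating depends entirely on the curvature of the return function at one’s scale, which is why the same logic licenses ‘no hedging’ for an individual and ‘worldview diversification’ for a mega-funder.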
I have a paper I’ve been kicking around in the back of my head for a couple of years to formalize essentially this idea via economic/financial modern portfolio theory and economic collective action problem theory—but *someone* has me working on more important problems and papers instead… Gregory. ;)