If you're Open Phil, you can hedge yourself against the risk that your worldview might be wrong by diversifying. But the rest of us are just going to have to figure out which worldview is actually right.
Minor/Meta aside: I don't think "hedging" or diversification is the best way to look at this, whether one is an individual or a mega-funder.
On standard consequentialist doctrine, one wants to weigh things up "from the point of view of the universe", and be indifferent as to "who is doing the work". Given this, it looks better to act in the way which best rebalances the humanity-wide portfolio of moral effort, rather than a narrower optimisation of "the EA community", "OP's grants", or one's own effort.
This rephrases the "neglectedness" consideration. Yet I think people don't often think enough about conditioning on the current humanity-wide portfolio, or see their effort as being a part of this wider whole, and this can mislead into moral paralysis (and, perhaps, insufficient extremising). If I have to "decide what worldview is actually right", I'm screwed: many of my uncertainties I'd expect to be resilient to a lifetime of careful study. Yet I have better prospects of reasonably believing that "This issue is credibly important enough that (all things considered, pace all relevant uncertainties) in an ideal world humankind would allocate X people to work on this; given that in fact there are Y, with Y << X, perhaps I should be amongst them."
This is a better sketch of why I work on longtermism than overall confidence in my "longtermist worldview". This doesn't make worldview questions irrelevant (there are a lot of issues where the sketch above applies, and relative importance will be one of the ingredients that goes into the mix of divining which one to take), but it means I'm fairly sanguine about perennial uncertainty. My work is a minuscule part of the already highly diversified corporate effort of humankind, and the tacit coordination strategy of people like me acting on our best guess of the optimal portfolio looks robustly good (a community like EA may allow better ones), even if (as I hope and somewhat expect) my own efforts transpire to have little value.
The reason I shouldn't "hedge" but Open Phil should is not so much because they can afford to (given they play with much larger stakes, better resolution on "worldview questions" has much higher value to them than to me), but because the returns to specialisation are plausibly sigmoid over the "me to OP" range. For individuals, there are increasing marginal returns to specialisation: in the same way we lean against "donation splitting" with money, so too with time (it seems misguided for me to spend, say, 30% on bio, 10% on AI, 20% on global health, etc.). A large funder (even though it still represents a minuscule fraction of the humanity-wide portfolio) may have overlapping marginal return curves between its top picks of the (all things considered) most promising things to work on, and it is better placed to realise other "portfolio benefits".
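To make the sigmoid point concrete, here is a toy sketch of my own (the logistic curve, its parameters, and the budgets are made-up numbers purely for illustration, not anything anyone has estimated): total returns to effort in a single cause are assumed to be S-shaped, convex at small scales and concave at large ones. Under that assumption, an "individual-sized" budget does best fully concentrated in one cause, while a "mega-funder-sized" budget does best split across causes.

```python
import numpy as np

def total_return(x, midpoint=10.0, scale=2.0):
    """Assumed sigmoid-shaped total return to effort x in one cause:
    convex (increasing marginal returns) below the midpoint,
    concave (diminishing returns) above it. Shifted so zero effort
    yields zero return. Purely illustrative numbers."""
    raw = lambda t: 1.0 / (1.0 + np.exp(-(t - midpoint) / scale))
    return raw(x) - raw(0.0)

def best_two_cause_split(budget, grid=201):
    """Grid-search allocations (a, budget - a) across two identical
    causes; return the best split and its total payoff."""
    alloc = np.linspace(0.0, budget, grid)
    payoff = total_return(alloc) + total_return(budget - alloc)
    i = int(np.argmax(payoff))
    return alloc[i], payoff[i]

for budget in (2.0, 40.0):   # "individual-sized" vs "mega-funder-sized"
    a, v = best_two_cause_split(budget)
    print(f"budget={budget:5.1f}: best split = ({a:.1f}, {budget - a:.1f}), payoff = {v:.3f}")
```

With these made-up numbers, the small budget's optimum puts everything into one cause (the convex region penalises splitting), while the large budget's optimum is roughly an even split (both causes are deep into diminishing returns), which is the shape of the argument above.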
I have a paper I've been kicking around in the back of my head for a couple of years to formalise essentially this idea via economic/financial modern portfolio theory and economic collective action problem theory, but *someone* has me working on more important problems and papers instead… Gregory. ;)