It seems that Open Phil wants a more satisfactory answer to moral uncertainty than just worldview diversification before ramping up the number of grants per year. Is this part of why you are hiring new Research Analysts, and if so, how much will they work on this problem? (This seems like a very interesting but hard problem.)
We could certainly imagine ramping up grantmaking without a much better answer. As an institution we’re often happy to go with a “hacky” approach that is suboptimal, but captures most of the value available under multiple different assumptions.
If someone at Open Phil has an idea for how to make useful progress on this kind of question in a reasonable amount of time, we’ll very likely find that worthwhile and go forward. But there are lots of other things for Research Analysts to work on even if we don’t put much more time into researching or reflecting on moral uncertainty.
Also note that we may pursue an improved understanding via grantmaking rather than via researching the question ourselves.
I’m very curious about how that improved understanding would come about via grantmaking. Do you have any write-up about this? I can see how you’d learn about tractability, and maybe about neglectedness, but I wonder how you’d incorporate this into your decision-making.
Anyway, this might go a little too off-topic so I’d understand if you replied to other questions first :)
I’m referring to the possibility of supporting academics (e.g., philosophers) to propose different approaches to moral uncertainty and explore their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Allocating_capital_to_buckets_and_causes, which may have different consequences for how much ought to be allocated to each bucket.)