I don’t think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe’s value and a 49% chance of destroying it. (Especially at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, aversion to worst-case scenarios, and ambiguity aversion.
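To see why repeated acceptance ends in near-certain destruction, here is the arithmetic (my own illustration of the standard point, with $V_0$ denoting the universe’s starting value): each round doubles the value with probability 0.51 and destroys it with probability 0.49, so after $n$ independent rounds

$$
\mathbb{E}[V_n] = (2 \times 0.51)^n \, V_0 = 1.02^n \, V_0 \longrightarrow \infty,
\qquad
\Pr(\text{survival after } n \text{ rounds}) = 0.51^n \longrightarrow 0.
$$

Expected value grows without bound, so a risk-neutral agent accepts every round, yet the probability that anything is left goes to zero.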
I argue here against the view that diminishing marginal returns to animal welfare would be sufficient for global health to win out at OP levels of funding, even if one is risk-neutral.
So long as small orgs apply to large grantmakers like OP, and so long as one is locally confident that OP is trying to maximize expected value, I’d expect that OP’s full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I’d echo Jeff’s suggestion that you should “top up” OP’s grants.
My main reason for trying to be mostly risk-neutral in my donations is that my donations are very small relative to the total size of the problem, which is not the case for my personal investments. I would donate differently, and more risk-aversely, if I controlled a significant share of all charitable donations in a given area. In particular, I do not endorse double-or-nothing gambling on the fate of the universe.
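One way to make the smallness argument precise (a sketch of the standard reasoning, with notation I’m introducing here): let $u$ be a concave value function over the total resources $X$ devoted to a problem, and let $\delta$ be a donor’s (possibly risky) contribution. If $\delta$ is tiny relative to $X$, a first-order Taylor expansion gives

$$
\mathbb{E}[u(X + \delta)] \approx u(X) + u'(X)\,\mathbb{E}[\delta],
$$

so ranking options by expected utility reduces to ranking them by expected value $\mathbb{E}[\delta]$. A funder large enough to move $X$ itself cannot drop the curvature terms, which is exactly why risk aversion re-enters at OP scale.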
You make a good point that OP is better positioned to make judgments about small donation opportunities, so I’ll have to revise my position that small donors should specifically seek out smaller organizations to donate to. But the same argument for “topping up” OP’s grants could equally support simply donating to an EA fund (which I expect will also take into account how its donations funge with OP).