Thanks for the post. This has changed my mind a bit... I'm particularly attracted to the argument in section 2 and the "Other Benefits..." above. What got me thinking, though, is that even though your section 3 is sound, I (and likely other small donors) still lack a more detailed framework for dealing with informational costs.
First, I feel psychologically drawn to the idea that large donors play the "angel investor" or VC role, while small donors are often drawn to "safe portfolios" with lower variance / risk… On the other hand, the analogy shouldn't really apply: I don't measure my returns in philanthropy the same way I do with personal investments, and I think there's no case for something like an EMH in philanthropy, so I could deal with a risky portfolio – that's why I'm pretty OK with donating to longtermist causes. The real problem is uncertainty: I won't regularly donate to a cause / project that I can't quite understand, or where it's impossible to learn about or observe improvements, even if it scores high in a preliminary ITN-like CBA. But if there's someone I trust vetting it, I can be OK with that.
Now, the case where I might have something like "private information" on the impact of a project – the "support people you know" advice – is the interesting one. A detour: this reminds me of a friend of mine who, instead of using financial markets like everyone else, would provide loans to acquaintances with stable jobs and high incomes, and made a lot of money doing that – since he could sidestep the information asymmetry plaguing banks. But eventually a friend defaulted, and he had some trouble collecting the money… It was no tragedy, but he realized he had neglected social costs and biases, and that he wasn't so great at screening… I imagine that, if I wanted to fund a grant to a skilled independent researcher I know, or to a new EA group, I'd be in an analogous situation. So even if I were pretty confident these projects were great and underfunded, I'd still want some sort of professional external opinion vetting them – maybe even want to totally outsource the decision, thereby avoiding the social cost of having to discontinue funding if the evidence ends up requiring it. And, of course, this sort of applies to personal projects too: even if you know better than anyone else what you could do, you could be particularly bad at deciding when to stop.
I think there could be some way to solve or mitigate this issue – maybe having a group of small donors interested in providing advice, or in funding each other's "support people you know" projects, so you could get an external opinion, dilute and cap risks, and have an excuse to cut the funding… But that's just what popped into my head now.