Finally, I expect that my earmarking of grant funds will be partially funged within the GFI organization, and I think this is inevitable, basically fine, and in fact weakly good.
I received a private request (from an early reviewer of this post) to expand on my thoughts here, so a few more words:
Aggregating information when making decisions under collective uncertainty is a hard problem (citation not required). I think my relative opinions here push the world towards a more efficient allocation, but I recognize that my opinions about GFI are inevitably incomplete. So if I overstated my certainty when translating my opinions into effects-on-the-world, I would expect to make the allocation of resources less efficient overall. If I insisted on absolutely no counterfactual funging, I would be overstating my confidence.
On the other hand, if I trust GFI to take my grants in the spirit in which they're intended, then I expect they'll take them as information given in good faith, trust that I was trying to communicate something I thought was not already known to them, consider what they know that (they think) was not known to me, and decide what the net effect of my additional opinion should be. (This should remind you of Aumann's agreement theorem, if you're familiar with that concept from the rationality literature.)
(I think it’s also plausible in general that earmarking $X as a vote of confidence in a particular program prompts the receiving organization to update their beliefs and direct more non-earmarked funding to that program than they would have otherwise, causing the opposite of funging.)
Do I actually believe that GFI’s principals are as good at playing this Aumann-esque information-aggregation game as the professional colleagues I’m used to working with? Probably not, no. But this is the way I think cooperative allocation of resources should play out, and I think that the EA community only gets better at it if we start discussing ideas like this and playing “cooperate” in the epistemic prisoners’ dilemma. And my instinct is actually that if some of my funding ends up being funged towards initiatives that GFI principals think are highest-value, it’s probably net good for the overall work.