Pronouns: he/him
Leave me anonymous feedback: https://docs.google.com/forms/d/e/1FAIpQLScB5R4UAnW_k6LiYnFWHHBncs4w1zsfpjgeRGGvNbm-266X4w/viewform
Contact me at: johnmichaelbridge[at]gmail[dot]com
Epistemic status: Uncertain and speculative. I try not to caveat my claims too much because it makes everything harder to read. If I've worded something too strongly, feel free to ask for clarification.
One of the reasons I no longer donate to EA Funds as often is that I think their funds lack a clearly stated theory of change.
For example, with the Global Health and Development fund, I'm confused why EAF hasn't updated at all in favour of growth-promoting systemic change like liberal market reforms. It seems like there is strong evidence that economic growth is a key driver of welfare, but the fund hasn't explained publicly why it prefers one-shot health interventions like bednets. It may well have good reasons for this, but there is absolutely no literature explaining the fund's position.
The LTFF has a similar problem, insofar as it largely funds researchers doing obscure AI Safety work. Nowhere does the fund openly state: "we believe one of the most effective ways to promote long-term human flourishing is to support high-quality academic research in the field of AI Safety, both for the purposes of sustainable field-building and in order to increase our knowledge of how to make sure increasingly advanced AI systems are safe and beneficial to humanity." Instead, donors are basically left to infer this theory of change from the grants themselves.
I don't think we can expect to drastically increase the take-up of funds without this sort of transparency. I'm sure the fund managers have thought about this privately, and that they have justifications for not making their thoughts public, but asking people to pour thousands of pounds/dollars a year into a black box is a very, very big ask.