Some napkin math:

The EA survey shows $6.75M in total donations from the 2,352 self-identifying EAs surveyed. Let's say it costs $50K/yr per EA employed full time at a nonprofit. $6.75M divided by $50K is 135. That likely overestimates the number of people you'd employ on that budget, given overhead costs and the fact that many cause areas have big expenses unrelated to employment (e.g. AMF has to pay for nets).
Of those 2,352 self-identifying EAs, if 135 are working for nonprofits full time and 353 (15%) are optimizing their careers for earning to give, that leaves 1,864 EAs doing self-supported impact-focused work (e.g. working as a researcher in a lab, working as a journalist, etc.)
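The arithmetic above can be checked with a quick script (the survey figures are as quoted; the $50K/yr cost and 15% earning-to-give share are the assumptions stated above):

```python
# Napkin math: how many EAs could current EA donations employ?
total_donations = 6_750_000   # $6.75M total donations, per the EA survey
surveyed_eas = 2_352          # self-identifying EAs surveyed
cost_per_employee = 50_000    # assumed $/yr to employ one EA full time at a nonprofit

employable = total_donations // cost_per_employee
print(employable)  # 135 (an overestimate: overhead, non-salary expenses)

earning_to_give = round(surveyed_eas * 0.15)  # assumed ~15% earning to give
self_supported = surveyed_eas - employable - earning_to_give
print(earning_to_give, self_supported)  # 353 1864
```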
Open Phil is a big wildcard. Dustin Moskovitz is worth on the order of $10 billion, and he says he wants to give away his entire fortune in his lifetime. (Inside Philanthropy describes this as a "supertanker" of money… it'll be interesting to see if/how the nonprofit world responds.) If he's got 60 years left, that averages out to around $160M/yr. In reality the total will likely exceed $10B, if Facebook does well or he diversifies his portfolio and invests wisely. That'd be enough money to employ ~3,000 EAs given the $50K/yr spending assumption above. (But the EA movement is growing.)
I added up the grants described on Open Phil’s website. They’re on the order of $40M. Over the past 3 years, Open Phil has been giving away money at a rate of around $13M/yr. I suppose that rate of giving will gradually increase until they’re giving away money 10x as fast? If so, marginal earning to give could be more valuable in the near term than the long term? This could be an argument against e.g. going to grad school to build certain sorts of career capital. I’m also curious what sort of giving opportunities Open Phil is not willing to fund. It does seem like they’ve demonstrated past reluctance to fund weird causes that might hurt their brand, but that might be changing? And, how reluctant are they to be a charity’s primary or sole funder? (Alluding to Telofy’s comments elsewhere in this thread.)
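Sketching the Open Phil figures under the same $50K/yr assumption (the $10B fortune, 60-year horizon, $40M grant total, and 10x scale-up are the rough estimates from above, not official numbers):

```python
# Moskovitz's fortune spread over a 60-year giving horizon
fortune = 10e9                                 # ~$10B (rough estimate)
years_remaining = 60
annual_giving = fortune / years_remaining      # ~$167M/yr
employable = annual_giving / 50_000            # ~3,300 EAs at $50K/yr each

# Open Phil's current grant rate vs. a hypothetical 10x scale-up
current_rate = 40e6 / 3                        # ~$13M/yr, from ~$40M over 3 years
scaled_rate = current_rate * 10                # ~$133M/yr, near the 60-year average

print(round(annual_giving / 1e6), round(employable))        # 167 3333
print(round(current_rate / 1e6), round(scaled_rate / 1e6))  # 13 133
```

Note that the hypothetical 10x rate roughly matches the 60-year average giving rate, which is why "10x as fast" is a natural guess for the eventual steady state.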
Funding weird stuff should just be a branding/logistics exercise. Highly exploratory stuff gets put out of sight in an R&D lab like Google X and only the successes are shown off. This is valuable to the degree that there might be valuable interventions cloaked by What You Can’t Say.
Giving away only small amounts for now is consistent with the VoI being much higher in the initial exploratory phase than any actual object-level outcome. The outside view says: most charitable efforts in the past have NOT consistently ratcheted towards effectiveness, but have, if anything, ratcheted towards uselessness. Understanding why is potentially worth billions, given the existence of the Giving Pledge and the idea that EA-type memes might heavily influence a substantial chunk of that money in the coming decades.
Relevant research questions might include:
How do we form excellent research teams?
How do we divvy up the search space among teams?
What sorts of search and synthesis heuristics should be considered best practice?
This direction hints at extending the frame of EA as a channel for leaking the lessons and practices of the for-profit world into the charity world. Can we do lean/agile charity? If so, can we find or develop excellent teams to execute on some part of the search space of charity interventions, give them seed funding, and check results? Etc.