I think it’s easy to miss the forest for the trees. Unless I’ve missed something:
- Before 2022, all EA outreach/infrastructure funding in total cost <<$200M.
  - It’s likely <$100M, but it’s hard to tell because some funding programs blur the lines between outreach/infra and direct work, e.g. paying for someone’s PhD program.
  - Notably, this is lower than Open Phil’s spending on criminal justice reform.
- EA outreach funding has likely generated substantially >>$1B in value.
- EA outreach is an area where we expect significant lags between spending and impact.
  - For example, Sam Bankman-Fried graduated MIT 8 years ago[1].
Raising the salience of our moral obligations, and of our empirical worldviews on how to do good, to future billionaires and future top researchers (or current billionaires and current top researchers) is by its very nature an extremely hits-based proposition.
If you’re only looking at a budget very loosely, it seems silly to complain about hundreds of thousands of dollars of spending when billions of dollars of foregone opportunities are on the line.
Now, if you’re looking at budgets in detail and investigating programs closely, I think it’s reasonable to be skeptical of some types of spending (e.g. if people are overpaid, eating overly fancy food, or not trying to save money on flights, or whatever). It’s probably even more important to be skeptical of weak theories of change, or poor operational execution.
I think SBF donating to utilitarian/LT stuff is probably overdetermined, so not something we can say EA outreach was useful for. However, I do not think this is true for everyone who’s extremely high impact. I think one of the strongest cruxes for the value of EA outreach etc. is whether or not “our top people” are drawn to EA without needing any active outreach. Current evidence suggests the outreach is pretty relevant.
“EA outreach funding has likely generated substantially >>$1B in value”
Would be curious how you came up with that number.
It was a very quick lower bound. From the LT survey a few years ago, roughly 50% of influences on quality-adjusted work in longtermism were from EA sources (as opposed to individual interests, idiosyncratic non-EA influences, etc.), and of that slice, maybe half is due to things that look like EA outreach or infrastructure (as opposed to e.g. people hammering away at object-level priorities getting noticed).
And then I think about whether I’d rather a) have all EAs except one disappear and have $4B more, or b) have $4B less but double the quality-adjusted number of people doing EA work. And I think the answer isn’t very close.
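To spell out the arithmetic, here’s a minimal back-of-envelope sketch of that lower bound. The 50%/50% splits come from the reasoning above; the $4B floor is my rough reading of the thought experiment, not a precise survey output:

```python
# Back-of-envelope lower bound on value generated by EA outreach/infrastructure.
# All inputs are rough illustrative assumptions, not exact survey figures.

ea_influence_share = 0.50  # ~50% of influences on quality-adjusted LT work were from EA sources
outreach_share = 0.50      # of that slice, maybe half looks like outreach/infrastructure

# The $4B thought experiment above implies the current stock of quality-adjusted
# EA work is worth well over $4B; take $4B as a conservative floor (assumption).
value_of_ea_work = 4e9

outreach_value = ea_influence_share * outreach_share * value_of_ea_work
print(f"Implied lower bound: ${outreach_value / 1e9:.1f}B")
# -> Implied lower bound: $1.0B
```

Since the $4B floor is deliberately conservative and the true value of the EA work stock is plausibly much higher, the product comes out at ">>$1B" rather than exactly $1B.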