The funding conversation we left unfinished
People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example possibly IPOing sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven’t started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see.
It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around.
CitizenTen, in “The Vultures Are Circling” (April 2022), puts it this way:
The message is out. There’s easy money to be had. And the vultures are coming. On many internet circles, there’s been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I’m not even an EA, but I can pretend, as getting a 10k grant is a good instrumental goal towards [insert-poor-life-goals-here]” Or, “Did you hear that a 16 year old got x amount of money? That’s ridiculous! I thought EA’s were supposed to be effective!” Or, “All you have to do is mouth the words community building and you get thrown bags of money.”
Basically, the sharp increase in rewards has led the number of people who are optimizing for the wrong thing to go up. Hello Goodhart. Instead of the intrinsically motivated EA, we’re beginning to get the resume padders, the career optimizers, and the type of person that cheats on the entry test for preschool in the hopes of getting their child into a better college. I’ve already heard of discord servers springing up centered around gaming the admission process for grants. And it’s not without reason. The Atlas Fellowship is offering a 50k, no strings attached scholarship. If you want people to throw out any hesitation around cheating the system, having a carrot that’s larger than most adult’s yearly income will do that.
Other highly upvoted posts from that era:
I feel anxious that there is all this money around. Let’s talk about it—Nathan Young, March 2022
Free-spending EA might be a big problem for optics and epistemics—George Rosenfield, April 2022
EA and the current funding situation—Will MacAskill, May 2022
The biggest risk of free-spending EA is not optics or motivated cognition, but grift—Ben Kuhn, May 2022
Bad Omens in Current Community Building—Theo Hawking, May 2022
The EA movement’s values are drifting. You’re allowed to stay put. - Marisa, May 2022
For many reasons, I wish FTX hadn’t committed fraud and collapsed, but one feels especially salient currently: we never finished processing how abundant funding impacts a high-trust altruistic community. The conversation had barely started.
I would say that I’m worried about these dynamics emerging again, but there’s something a little more complicated here. Ozy actually calls out a similar strand of dysfunction in (parts of) EA in early 2024:
Effective altruist culture ought to be about spending resources in the most efficient way possible to do good. Sure, sometimes the most efficient way to spend resources to do good doesn’t look frugal. I’ve long advocated for effective altruist charities paying their workers well more than average for nonprofits. And a wise investor might make 99 bets that don’t pay off to get one that pays big. But effective altruist culture should have a laser focus on getting the most we can out of every single dollar, because dollars are denominated in lives.
...
It’s cool and high-status to travel the world. It’s cool and high-status to go on adventures. It’s cool and high-status to spend time with famous and influential people. And, God help us, it’s cool and high-status to save the world.

I think something like this is the root of a lot of discomfort with showy effective altruist spending. It’s not that yachting is expensive. It’s that if your idea of what effective altruists should be doing is yachting, a reasonable person might worry that you’ve lost the plot.
So these dynamics are not “emerging again”. They haven’t left. And I’m worried that they might get turbocharged when money comes knocking again.
Sure, seems plausible.
I guess I kind of like @William_MacAskill’s piece, or as much of it as I remember.
My recollection is roughly this:
Yes, it’s strange to have lots more money.
Perhaps we’re spending it badly.
But erring toward spending too little might be a bad thing, too.
Frugal EA had something to recommend it.
But more impact probably requires more resources.
This seems good, though I guess it feels like a missing piece is:
Are we sure this money was obtained ethically?
How much will it hurt us to take this money if it was obtained for bad reasons?
Also, looking back, @trammell’s takes have aged very well:
It is unlikely we are in the most important time in history
If not, it is good to save money for that time
Had Phil been listened to, perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
So my non-EA friends point out that EAs have an incentive to suck up to any group that is about to become rich. This is something I haven’t seen a solid path through:
It is much more effective to deal with the people who have the most money.
It is hard to retain one’s virtue while doing so.
Having known, and had conflict with, a number of wealthy people, I can say it is hard to retain one’s sense of integrity in the face of life-changing funds. I’ve talked to SBF, and even after the crash I felt a gravity: I didn’t want to insult him lest he one day return to the heights of his influence. Sometimes that made me too cautious; sometimes, avoiding that caution, I was reckless.
I guess in some sense the problem is that finding ways through uncomfortable situations requires sitting in discomfort, and I don’t find EA to have a lot of internal battery for that kind of thing. Have we really resolved most of the various crises in a way that created harmony between those who disagreed? I’m not sure we have. So it’s hard to be optimistic here.
My understanding of what happened is different:
Not that much of the FTX FF money was ever awarded (~$150-200 million, details).
A lot of the FTX Future Fund money could have been clawed back (I’m not sure how often this actually happened) – especially if it was unspent.
It was sometimes voluntarily returned by EA organisations (e.g. BERI) or paid back as part of a settlement (e.g. Effective Ventures).
And some of the FTXFF monies went to entities with no clear connection to the EA community, especially bioscience firms. Several of the bigger recipients on the list Tobias linked fall into that category.
Unless you explicitly warn your donors that you’re going to sit on their money and do nothing with it, you might anger them by employing this strategy, such that they won’t donate to you again. (I don’t know if SBF would have noticed or cared because he couldn’t even sit through a meeting or an interview without playing a video game, but what applies to SBF doesn’t apply to most large donors.)
Also, if there is a most important time in history, and if we can ever know we’re in the most important time in history while we’re in it, it might be 100 years or 1,000 years from now, and obviously holding onto money that long is a silly strategy. (Especially if you think we’re going to start having 10% economic growth within 50 years due to AI, but even if you don’t.)
As a donor, I want to donate to charities that can “beat the market” in terms of their impact, i.e., the impact they create by spending the money now is big enough that it is bigger than the effects of investing the money and spending it in 5 years. I would be furious if I found out the charities I donate to were employing the invest-and-wait strategy. I can invest my own money or give it to someone who will spend it.
I don’t think trying to invest for a long time is obviously a silly strategy. But I agree that people or groups of people should decide for themselves whether they want to try to do that with their money, and a charity fundraising this year would be betraying their donors’ trust if their plan was actually to invest it for a long time.
My intuition about patient philanthropy is this: if I have $1 million that I can spend philanthropically now or I can invest it for 100 years at a 7% CAGR and grow it to $868 million in 2126, I think spending the $1 million in 2026 will have a bigger, better impact than the $868 million in 2126.
Gross world product per capita (PPP) is around $24,000 now. It’s forecasted to grow at 2% a year. At 2% a year for 100 years, it will be $174,000 in 2126. So, the world on average will be much wealthier than the wealthiest nations today. The U.S. GDP per capita (PPP) is $90,000 and Norway’s is $107,000 — I’m ignoring tax havens with distorted stats.
Why should the poor people of today give to the rich people of the future? How is that cost-effective?
The difference between the GiveWell estimate of the cost to save a life and the estimated statistical cost of saving a life in the U.S. is $3,500 vs. $9 million, so a ~2,500x difference. $1 million now could save 285 lives. $868 million in 2126 could save 96 lives — if we think poorer countries will have catch-up growth that brings them up to $90,000+ in GDP per capita (PPP).
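For anyone who wants to check the arithmetic, here is a minimal sketch that reproduces the back-of-the-envelope comparison above, using only the figures quoted in this comment ($1 million today, a 7% CAGR over 100 years, ~$3,500 per life saved now, ~$9 million per life saved in 2126). These are rough assumptions, not data:

```python
# Back-of-the-envelope: donate $1M now vs. invest for 100 years and donate in 2126.
# All inputs are the rough figures quoted above, not data.

principal = 1_000_000            # dollars available today
cagr = 0.07                      # assumed investment return
years = 100

future_value = principal * (1 + cagr) ** years   # ~$868 million in 2126

cost_per_life_now = 3_500        # GiveWell-style estimate today
cost_per_life_2126 = 9_000_000   # rich-country cost per life, if catch-up growth happens

lives_now = principal / cost_per_life_now        # ~285, the figure quoted above
lives_2126 = future_value / cost_per_life_2126   # ~96

print(f"Future value in 2126: ${future_value:,.0f}")
print(f"Lives saved by giving now:     {lives_now:.0f}")
print(f"Lives saved by giving in 2126: {lives_2126:.0f}")
print(f"Give-now advantage: ~{lives_now / lives_2126:.1f}x")
```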
The poorest countries may not have catch-up growth, and may not even grow commensurately with the world average, but in that case it’s even more important to spend the $1 million on the poorest countries now to try to make sure that growth happens. Stimulating economic growth in sub-Saharan African countries where growth has been stagnant may be one of the most important global moral priorities. Thinking about 100 years in the future only makes it feel more urgent, if anything.
Plus, the risk that a foundation trying to invest money for 100 years doesn’t make it to 2126 seems high.
If you factor in the possibility of transformative technologies like much more advanced AI and robotics, biotech, and so on, and/or the possibility of much faster per capita economic growth over the next 100 years, the case for spending now rather than waiting a century gets even stronger.
If they haven’t exhibited catch-up growth by 2126, I expect $868 million then is more likely to trigger it than $1 million today.
But the opportunity cost of not spending the $1 million today is surely much more than $867 million?
That depends on how long it would have stayed poor without the intervention!
Didn’t you stipulate it would be at least 100 years in the scenario we’re imagining? Surely it’s worth spending at least 1,000x more resources to end global poverty 100 years sooner? (Otherwise, why not wait 1,000 years or 10,000 years to donate your first dollar to global poverty?)
The returns certainly aren’t all that matter.
I don’t follow your questions. We’re comparing spending now to induce some chance of growth starting now with spending later to induce some chance of growth starting later, right? To make the scenario precise, say
The country is currently stagnant, and its people collectively enjoy “1 util per year”. Absent your intervention, it will stay stagnant for 200y.
Spending $1m now has a 1% chance of kicking off catch-up growth.
Investing it for 100y before spending has a 4% chance of kicking off catch-up growth then (because $868m>>$1m). The money won’t be lost in the meantime (or, we can say that the chance it gets lost is incorporated into the 4%).
In either case, the catch-up will be immediate and bring them to a state where they permanently collectively enjoy “2 utils per year”.
In this case, the expected utility produced by spending now is 1% × (2-1) × 200 = 2 utils.
The expected utility produced by spending in 100y is 4% × (2-1) × 100 = 4 utils.
The gap can be arbitrarily large if we imagine that the default is stagnation for longer than 200y (or arbitrarily negative if we imagine it was close to 100y), and this holds regardless of how large a benefit the money produces for the beneficiaries (the gap between the 2 utils and the 1 util).
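Here is a minimal sketch of that toy model with the stagnation horizon left as a parameter, using the made-up numbers stipulated above (1% vs. 4% chance of triggering catch-up, a 1-util-per-year gap). It just shows how the comparison flips as the default stagnation period approaches or exceeds 100 years:

```python
# Toy model from the scenario above: a stagnant country at 1 util/year, and an
# intervention that may trigger permanent catch-up to 2 utils/year.
# The probabilities and utilities are the illustrative ones stipulated above.

def extra_utils(p_success: float, years_of_benefit: float) -> float:
    """Expected extra utility: P(catch-up) x (2 - 1) utils/year x years of benefit."""
    return p_success * 1.0 * max(years_of_benefit, 0)

def compare(default_stagnation_years: float) -> tuple[float, float]:
    spend_now = extra_utils(0.01, default_stagnation_years)            # 1% chance now
    spend_in_100y = extra_utils(0.04, default_stagnation_years - 100)  # 4% chance later
    return spend_now, spend_in_100y

for horizon in (110, 200, 500, 1000):
    now, later = compare(horizon)
    print(f"default stagnation {horizon:>4}y: spend now = {now:6.2f} utils, "
          f"spend in 100y = {later:6.2f} utils")
```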
Do you really, actually, in practice, recommend that everyone in the world delays all spending on global poverty/global health for 100+ years? As in, the Against Malaria Foundation should stop procuring anti-malarial bednets and just invest all its funds in the Vanguard FTSE Global All Cap Index Fund instead? Partners in Health should wind down its hospitals and become a repository for index funds? If not, why not?
With the closest thing we have to real numbers (that I’ve been able to figure out, so far, anyway), my back-of-the-envelope calculation above found that it was ~3x as cost-effective to donate money now than to invest and wait 100 years. Do you find that rough math at all convincing?
I don’t know how to quantify the economic growth question with anything approaching real numbers. It would probably be a back-of-the-envelope calculation with a lot more steps and a lot more uncertainty than even the non-rigorous calculation I did above. There are many complicated considerations that can’t be mathematically modelled. For example: if wealthy people and organizations in wealthy countries have ~1,000x more resources in 100 years, it seems like the marginal cost-effectiveness of any one patient philanthropic foundation on global poverty would decline commensurately, since, all else being equal, you’d think overall giving to global poverty would increase ~1,000x.
If you think there’s at least an, I don’t know, 5% chance of transformative AI within the next 100 years, that also changes things, because transformative AI would cause rapid economic growth all over the planet, and then the marginal cost-effectiveness of your philanthropic funds in 2126 really will have decreased. But of course the invention of transformative AI is impossible to forecast.
No: I think that people should delay spending on global poverty/health on the current margin, not that optimal total global poverty/health spending today would be 0.
But that’s a big question, and I thought we were just trying to make progress on it by focusing on one narrow angle here: namely whether or not it is in some sense “at least 1,000x better to stimulate faster economic growth in the poorest countries today than it is to do it 100 years from now”. I think that, conditional on a country not having caught up in 100 years, there’s a decent chance it will still not have caught up in 200 years; and that in this case, when one thinks it through, initiating catch-up in 100 years is at least half as good as doing so today, more or less.
Since we have no real numbers for that narrow angle and it involves important factors we can’t mathematically model, I don’t know if we can settle that narrow question.
But what about the other narrow question: that if you assume the poorest countries’ per capita GDP will grow to ~50% of per capita GWP in 100 years (with GWP per capita continuing to grow at 2% a year over that timespan), the cost-effectiveness of saving a life by donating to GiveWell’s top charities today is ~3x higher than investing for 100 years and giving in 2126?
Interesting, say more about how you see EA struggling or failing to sit in discomfort?
I think the other missing piece is “what will this money do to the community fabric, what are the trade-offs we can take to make the community fabric more resilient and robust, and are those trade-offs worth it?”
When it comes to funding effective charities, I agree that having more money is straightforwardly good. It’s the second-order effects on the community (the current people in it and what might make them leave, the kinds of people who are more likely to become new entrants) that I’m more concerned with.
I anticipate that the rationalists would have to face a similar problem but to a lesser degree, since the idea that well-kept gardens die by pacifism is more in the water there, and they are more ambivalent about scaling the community. But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.
I’ll just note that when the original conversation started, I addressed this in a few parts.
To summarize, I think that yes, EA should be enormous, but it should not be a global community, and it needs to grapple with how the current community works, and figure out how to avoid ideological conformity.
A basic issue with a lot of deliberate philanthropy is the tension between:
In many domains, many of the biggest gains are likely to come from marginal opportunities, e.g. because they have more value of information, more large upsides, and more often address neglected areas (and are therefore plausibly strategically important).
Marginal opportunities are harder to evaluate.
There’s less preexisting understanding, on the part of fund allocators.
The people applying would tend to be less tested.
Therefore, it’s easier to game.
The kneejerk solution I’d propose is “proof of novel work”. If you want funding to do X, you should show that you’ve done something to address X that others haven’t done. That could be a detailed, insightful write-up (which indicates serious thinking / fact-finding); that could be some work you did on the side, which isn’t necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I’m curious where it doesn’t work. Also curious what else has been tried. (E.g. many organizations do “don’t apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, …}”.)
Thanks for restarting this conversation!
Relatedly, it’s also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with—or at least are consonant with—their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there’s not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source has a continuing role at a frontier AI lab, or has a significant portion of their wealth still tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors’ personal financial interests could bias the community’s actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy for an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice on how to spend their money.
But—one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don’t write posts and comments in support of stop/pause advocacy because they don’t want to irritate the new funders. Maybe grantmakers don’t recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There’s also a risk of losing public credibility—it would not be hard to cast orgs that took AI-involved source funds as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Orgs could commit to identifying whether, and how much of, their funding comes from significantly AI-involved sources.
Many orgs could have a limit on the percentage of their budget they will accept from significantly AI-involved sources. Some orgs—those with particular sensitivity on AI knowledge and policy—should probably avoid any major gifts from AI-involved sources at all.
Particularly sensitive orgs could be granted extended runways and/or funding agreements with some sort of independent protection against non-renewal.
Other donors could provide more funding for red-teaming AI work, especially work that could affect the interests of significantly AI-involved donors.
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by simulating interest in EA.
I find so much EA analysis, in general, to be too clever by half. (Per Wiktionary: “Shrewd but flawed by overthinking or excessive complexity, with a resulting tendency to be unreliable or unsuccessful.”) So many conversations like this could be helped along by just having a simpler and more commonsense analysis. Does EA need to have a big conversation right now about how to handle it if EA suddenly gets tons of money? Probably not.
Expecting the money to come in sounds like wishful thinking. Even if there are Anthropic billionaires with liquidity in 2026 or 2027 (which is not guaranteed to happen), even if these billionaires are influenced by EA and want to give money to some of the same charities or cause areas as people in EA care about, who says the money is going to flow through the EA community/movement? If I were an Anthropic billionaire, rather than trying to be Sam Bankman-Fried 2.0 and just spraying a firehose of money at the EA community generally, I would pick the charities I want to donate to and give to them directly.
Besides Sam Bankman-Fried, the other billionaires who have donated to EA-related charities and causes, like Dustin Moskovitz/Cari Tuna and Jaan Tallinn, have managed their own giving entirely. Sam Bankman-Fried’s behaviour in general was impulsive and chaotic — his financial crimes seem less like rational calculation and more like poor impulse control or general disinhibition, as crime often is — and the way he gave money to EA seems like an extension of that. A more careful person probably wouldn’t do it that way. They would probably start a private foundation, hire some people to help manage it, and run it quietly out of the public view. Maybe they would take the unusual step of starting something like Open Philanthropy/Coefficient Giving and do their giving in a more public-facing way. But even so, this is still under their control and not the EA community’s control.
If some Anthropic billionaire does just back a truck full of money up to the EA community, that’s a good problem to have, and that’s the sort of problem you can digest and adapt to as it starts happening. You don’t need to invest a lot of your limited resources of time, energy, and attention to it 6 months to 3 years in advance, when it’s not actually clear it will ever happen at all. (This isn’t an asteroid, you don’t need to fret about long-tail risk.)
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.
Do you think lots of money will just be given to EA-related charities such as the Against Malaria Foundation, the Future of Life Institute, and so on (that sounds plausible to me) or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing? It’s the second part that I’m doubting.
I suppose a lot of it comes down to what specifically Daniela Amodei and Holden Karnofsky decide to do if their family has their big liquidity event, and that’s hard to predict. Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.
You’re probably doubting this because you don’t think it’s a good way to spend money. But that doesn’t mean that the Anthropic employees agree with you.
The not-super-serious answer would be: US universities are well-funded in part because rich alumni like to fund them. There might be similar reasons why Anthropic employees might want to fund EA infrastructure/community building.
If there is an influx of money into ‘that sort of thing’ in 2026/2027, I’d expect it to look different to the 2018-2022 spending in these areas (e.g. less focused on general longtermism, more AI-focused, maybe more decentralised, etc.).
Different in what ways? Edit: You kind of answered this in your edit, but what I’m getting at is: SBF’s giving was indiscriminate and disorganized. Do you think the Anthropic nouveau riche will give money as freely to random people in EA?
I’m also thinking that Daniela Amodei said this about effective altruism earlier this year:
Maybe it was just a clumsy, off-the-cuff comment. But it makes me go: hmm.
She’s gonna give her money to meta-EA?
He was leading the Open Philanthropy arm that was primarily responsible for funding many of the things you list here:
That’s a really good point!
I guess my next thought is: are we worried about Holden Karnofsky corrupting effective altruism? Because if so, I have bad news…
I’ll bet you a $10 donation to the charity of your/my choice that by December 31, 2026, not all three of these things will be true:
Anthropic will have successfully completed an IPO at a valuation of at least $200 billion and its market cap will have remained above $200 billion.
More than $100 million in new money (so, at least $100 million more than in 2025 or 2024, and from new sources) will be donated to EA Funds or a new explicitly EA-affiliated fund similar to the FTX Future Fund (managed at least in part by people with active existing roles in the EA community as of December 10, 2025) by Anthropic employees in 2026 other than Daniela Amodei, Holden Karnofsky, or Dario Amodei. (Given Karnofsky’s historical role in the EA movement and EA-related grantmaking, I’m excluding him, his wife, and his brother-in-law from consideration as potentially corrupting influences.)
A survey of at least ten representative and impartial EA Forum users will find that more than 50% believe it’s likely that this very EA Forum post on which we’re commenting non-trivially reduced the amount of corruption relating to that $200+ million in a way that could not have been achieved by waiting to have the conversation until after the Anthropic IPO was officially announced. Or a majority of 1-3 judges we agree on believe that is likely.
I think that at least one and possibly two or all three of these things won’t be true by December 31, 2026. If at least one of them isn’t true, I win the bet. If all three are true, you win the bet.
My thought process is vaguely, hazily something like this:
There’s a ~50% chance Anthropic will IPO within the next 2 years.
Conditional on an Anthropic IPO, there’s a ~50% chance any Anthropic billionaires or centimillionaires will give tons of money to meta-EA or EA funds.
Conditional on Anthropic billionaires/centimillionaires backing up a truck full of money to meta-EA and EA funds, there’s a ~50% chance that worrying about the potential corrupting effects of the money well in advance is a good allocation of time/energy/attention.
So, the overall chance this conversation is important to have now is ~10%.
The ~50% probabilities and the resulting ~10% probability are totally arbitrary. I don’t mean them literally. This is for illustrative purposes only.
But the overall point is that it’s like the Swiss cheese model of risk where three things have to go “wrong” for a problem to occur. But in this case, the thing that would go “wrong” is getting a lot of money, which has happened before with SBF’s chaotic giving, and has been happening continuously in a more careful way with Open Philanthropy (now Coefficient Giving) since the mid-2010s.
If SBF had made his billions from selling vegan ice cream and hadn’t done any scams or crimes, and if he had been more careful and organized in the way he gave the money (e.g. been a bit more like Dustin Moskovitz/Cari Tuna or Jaan Tallinn), I don’t think people would be as worried about the prospect of getting a lot of money again.
Even if the situation were like SBF 2.0, it doesn’t seem like the downsides of that would be that bad or that hard to deal with (compared to how things in EA already are right now), so the logic of carefully preparing for a big impact risk on the ~10% — or whatever it is — chance it happens doesn’t apply. It’s a small impact risk with a low probability.
And, overall, I just think the conversations like this I see in EA are overly anxious, overly complicate things, and intellectualize too much. I don’t think they make people less corruptible.
Hey, thanks for this comment. To be clearer about my precise model, I don’t expect there to be new Anthropic billionaires or centimillionaires. Instead, I’m expecting dozens (or perhaps low hundreds) of software engineers who can afford to donate high six to low seven figure amounts per year.
Per levels.fyi, here is what Anthropic comp might look like:
And employees who joined the firm early often had agreements of 3:1 donation matching for equity (that is, Anthropic would donate $3 for every $1 that the employee donates). My understanding is that Anthropic had perks like this specifically to try to recruit more altruistic-minded people, like EAs.
Further, other regrantors in the space agree that a lot more donations are coming.
(Also note that Austin is expecting 1-2 more OOMs of funding than me. He is also much more plugged into the actual scene.)
Here’s what the historical data on EA grantmaking looks like:
I anticipate that the new funds pouring specifically into the EA ecosystem will not be at the scale of another OpenPhil (disbursing $500m+ per year), but there’s a small chance it might match the scale of GiveWell (disbursing ~$200m per year, though much more focused on meta-EA, x-risk, and longtermist goals than GW), and I would be very surprised if it fails to match SFF’s scale (disbursing ~$30m a year) by the end of 2026.
How much new funding is Austin Chen expecting? Is it conditional on an Anthropic IPO? Are your expectations conditional on an Anthropic IPO?
I suppose the whole crux of the matter is this: even if there is an additional ~$300-400 million per year, what percentage will go into meta-EA, EA funds, general open grantmaking, or the broader EA community, as opposed to GiveWell, GiveWell’s recommended charities, or existing charities like the Future of Life Institute? If it’s a low percentage, the conversation seems moot.
I don’t know how much new funding Austin Chen is expecting.
My expectations are not contingent on Anthropic IPOing, and presumably neither is Austin’s. Employees are paid partially in equity, so some amount of financial engineering will be done to allow them to cash out, whether or not an IPO is happening.
I expect that, as these new donors are people working in the AI industry, a significant percentage is going to go into the broader EA community and not directly to GW. Double digit percentage for sure, but pretty wide CI.
And funny you should mention FLI: they specifically say they do not accept funding from “Big Tech” and AI companies, so I’m not sure where that leaves them.
They are also a fairly small non-profit and I think they would struggle to productively use significantly more funding in the short term. Scaling takes time and effort.
Appreciate the shoutout! Some thoughts:
Anthropic has lately been valued at $350b; if we estimate that e.g. 6% of that is in the form of equity allocated to employees, that’s $21B spread across the ~3,000 employees they currently have, or an average of $7m/employee.
I think 6% is somewhat conservative and wouldn’t be surprised if it were more like 12-20%
Early employees have much (OOMs) more equity than new hires. Here’s one estimate generated by Claude and me:
Even after discounting for standard vesting terms (4 years), the % who are EAs, and the % allocated to charity, that’s still a mind-boggling amount of money. I’d guess that this is more like “10 new OpenPhils in the next 2-6 years”
I heard about the IPO rumors at the same time as everyone else (ie very recently), but for the last 6 months or so, the expectation was that Anthropic might have a ~yearly liquidity event, where Anthropic or some other buyer buys back employee stock up to some cap ($2m was thrown around as a figure)
As reported in other places, early Anthropic employees were offered a 3:1 match on donations of equity, iirc up to 50% of their total stock grant? New employees are now offered a 1:1 match, but the 3:1 holds for the early ones (though not cofounders)
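To make the equity arithmetic above easier to follow, here is a rough sketch using the figures mentioned in this thread (a ~$350B valuation, ~3,000 employees, a 6-20% employee equity pool, and the 3:1 match); the valuation, headcount, and pool shares are guesses, and the donation amount is purely hypothetical:

```python
# Rough equity math from the comments above. The valuation, headcount, and pool
# shares are the commenter's guesses; the donation amount is purely hypothetical.

valuation = 350e9       # ~$350B reported valuation
employees = 3000        # rough current headcount

for pool_share in (0.06, 0.12, 0.20):   # share of equity held by employees
    pool = valuation * pool_share
    print(f"{pool_share:.0%} employee pool: ${pool / 1e9:.0f}B total, "
          f"~${pool / employees / 1e6:.1f}M per employee on average")

# Hypothetical early employee donating $1M of vested equity under the 3:1 match.
donation = 1_000_000
match_ratio = 3
print(f"${donation:,} donated -> ${donation * (1 + match_ratio):,} reaches charity")
```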
Can you explain this math for me? The figure you started with is $21 billion in Anthropic equity, so what’s your figure for Open Philanthropy/Coefficient Giving? Dustin Moskovitz’s net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that’s at least $6 billion. $21 billion is only 3.5x $6 billion, not 10x.
If you assume that, of that $21 billion in Anthropic equity, 50% is owned by people who identify with or like effective altruism, that’s $10.5 billion. If they donate 50% of it to EA-related charities, that’s around $5 billion. So, even on these optimistic assumptions, that would only be around one Open Philanthropy (now Coefficient Giving), not ten.
What didn’t I understand? What did I miss?
I think this pledge is over their lifetime, not over the next 2-6 years. OP seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin’s time frame.
But if it’s $21 billion total in Anthropic equity, that $21 billion is going to be almost all of the employees’ lifetime net worth — as far as we know and as far as they know. So, why would this $21 billion all get spent in the next 2-6 years?
If we assume, quite optimistically, half of the equity belongs to people who want to give to EA-related organizations, and they want to give 50% of their net worth to those organizations over the next 2-6 years, that’s around $5 billion over the next 2-6 years.
If Open Philanthropy/Coefficient Giving is doing $1 billion a year like you said, that’s around one OP/CG, not ten.
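Here is a minimal sketch of that comparison under the optimistic assumptions stated above (half of the estimated $21 billion pool held by EA-sympathetic employees, half of that donated within the window, and OP/CG spending roughly $1 billion per year); all inputs are guesses from this thread, not actual data:

```python
# Comparing hypothetical Anthropic-employee giving against OP/CG's spending rate,
# using the optimistic assumptions discussed above (guesses, not actual data).

employee_equity_pool = 21e9    # $21B, from the 6%-of-$350B estimate upthread
ea_sympathetic_share = 0.5     # optimistic: half held by EA-sympathetic employees
donated_share = 0.5            # optimistic: they donate half of that over the window

op_cg_spend_per_year = 1e9     # OP/CG spending, ~$1B/year per the comment above

potential_giving = employee_equity_pool * ea_sympathetic_share * donated_share

for years in (2, 6):
    op_total = op_cg_spend_per_year * years
    print(f"Over {years} years: ~${potential_giving / 1e9:.2f}B of new giving vs "
          f"~${op_total / 1e9:.0f}B from OP/CG (~{potential_giving / op_total:.1f}x)")
```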
If OP/CG is really spending $1 billion/year, then OP/CG must have a lot more donations coming in from people other than Dustin Moskovitz or Cari Tuna than I realized. Either that or they’re spending down their fortune much faster than I thought.
Oh, so if this is not IPO-contingent, what explains the timing on this? Why 2026 or 2027 and not 2025 or 2024?
I do know there are platforms like Forge Global and Hiive that allow for buying/selling shares in private startups on the secondary market. I just wonder why a lot of people would be selling their shares in 2026 or 2027, specifically, rather than holding onto them longer. I think many employees of these AI companies are true believers in the growth story and the valuation story for these companies, and might be reluctant to sell their equity at a time when they feel they’re still in the most rapid growth phase of the company.
Any particular reason to think many people out of these dozens or hundreds of nouveau riche will want to donate to meta-EA? I understand the argument for people like Daniela Amodei and Holden Karnofsky to give to meta-EA (although, as noted in another comment, Daniela Amodei says she doesn’t identify with effective altruism), but I don’t understand the argument for a lot of smaller donors donating to meta-EA.
Interesting footnote about the Future of Life Institute. Would that apply to a software engineer working for OpenAI or Anthropic, or just a donation directly from one of those companies?
My general point about established charities like the Future of Life Institute or any other example you care to think about is that most donors will probably prefer to donate directly to charities rather than donating through an EA fund or a regranter. And most will probably want to donate to things other than meta-EA.
These are good questions and points. I have answers and explanations such that the points you raise do not particularly change my mind, but I feel an aversion towards explaining them on a public forum. Thanks for understanding.