My thought process is vaguely, hazily something like this:
There’s a ~50% chance Anthropic will IPO within the next 2 years.
Conditional on an Anthropic IPO, there’s a ~50% chance any Anthropic billionaires or centimillionaires will give tons of money to meta-EA or EA funds.
Conditional on Anthropic billionaires/centimillionaires backing up a truck full of money to meta-EA and EA funds, there’s a ~50% chance that worrying about the potential corrupting effects of the money well in advance is a good allocation of time/energy/attention.
So, the overall chance this conversation is important to have now is ~10% (0.5 × 0.5 × 0.5 = 12.5%, rounding down).
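For concreteness, here is that conjunctive estimate as a minimal Python sketch. The three 50% inputs are the illustrative placeholders from above, not real forecasts.

```python
# Swiss-cheese-style conjunctive estimate, using the illustrative ~50% inputs.
p_ipo = 0.5               # Anthropic IPOs within ~2 years
p_funds_meta_ea = 0.5     # conditional: new wealth flows to meta-EA/EA funds
p_worry_worthwhile = 0.5  # conditional: worrying in advance is worth the attention

p_important_now = p_ipo * p_funds_meta_ea * p_worry_worthwhile
print(f"{p_important_now:.1%}")  # 12.5%, rounded to ~10% above
```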
The ~50% probabilities and the resulting ~10% probability are totally arbitrary. I don’t mean them literally. This is for illustrative purposes only.
But the overall point is that it’s like the Swiss cheese model of risk where three things have to go “wrong” for a problem to occur. But in this case, the thing that would go “wrong” is getting a lot of money, which has happened before with SBF’s chaotic giving, and has been happening continuously in a more careful way with Open Philanthropy (now Coefficient Giving) since the mid-2010s.
If SBF had made his billions from selling vegan ice cream and hadn’t done any scams or crimes, and if he had been more careful and organized in the way he gave the money (e.g. been a bit more like Dustin Moskovitz/Cari Tuna or Jaan Tallinn), I don’t think people would be as worried about the prospect of getting a lot of money again.
Even if the situation were like SBF 2.0, it doesn’t seem like the downsides would be that bad or that hard to deal with (compared to how things in EA already are right now), so the logic of carefully preparing for a big-impact risk on the ~10% (or whatever it is) chance it happens doesn’t apply. It’s a small-impact risk with a low probability.
And, overall, I just think the conversations like this I see in EA are overly anxious, overly complicate things, and intellectualize too much. I don’t think they make people less corruptible.
Hey, thanks for this comment. To be clearer about my model, I don’t expect there to be new Anthropic billionaires or centimillionaires. Instead, I’m expecting dozens (or perhaps low hundreds) of software engineers who can afford to donate high-six to low-seven-figure amounts per year.
Per levels.fyi, here is what Anthropic comp might look like:
And employees who joined the firm early often had agreements of 3:1 donation matching for equity (that is, Anthropic would donate $3 for every $1 that the employee donates). My understanding is that Anthropic had perks like this specifically to try to recruit more altruistically minded people, like EAs.
Further, other regrantors in the space agree that a lot more donations are coming.
(Also note that Austin is expecting 1-2 more OOMs of funding than me. He is also much more plugged into the actual scene.)
Here’s what the historical data on EA grantmaking looks like:
I anticipate that the new funds pouring into the EA ecosystem specifically will not be at the scale of another OpenPhil (disbursing $500M+ per year), but there’s a small chance they might match the scale of GiveWell (disbursing ~$200M per year, though more focused on meta-EA, x-risk, and longtermist goals than GW is), and I would be very surprised if they fail to match SFF’s scale (disbursing ~$30M a year) by the end of 2026.
How much new funding is Austin Chen expecting? Is it conditional on an Anthropic IPO? Are your expectations conditional on an Anthropic IPO?
I suppose the whole crux of the matter is: even if there is an additional ~$300-400 million per year, what percentage will go into meta-EA, EA funds, general open grantmaking, or the broader EA community, as opposed to GiveWell, GiveWell’s recommended charities, or existing charities like the Future of Life Institute? If it’s a low percentage, the conversation seems moot.
I don’t know how much new funding Austin Chen is expecting.
My expectations are not contingent on Anthropic IPOing, and presumably neither are Austin’s. Employees are paid partially in equity, so some amount of financial engineering will be done to allow them to cash out, whether or not an IPO happens.
I expect that, as these new donors are people working in the AI industry, a significant percentage is going to go into the broader EA community and not directly to GW: a double-digit percentage for sure, but with a pretty wide CI.
And funny you should mention FLI: they specifically say they do not accept funding from “Big Tech” and AI companies, so I’m not sure where that leaves them.
They are also a fairly small non-profit and I think they would struggle to productively use significantly more funding in the short term. Scaling takes time and effort.
Appreciate the shoutout! Some thoughts:
Anthropic was recently valued at $350B; if we estimate that, e.g., 6% of that is in the form of equity allocated to employees, that’s $21B between the ~3,000 employees they currently have, or an average of $7M/employee (see the sketch after this comment).
I think 6% is somewhat conservative and wouldn’t be surprised if it were more like 12-20%.
Early employees have much (OOMs) more equity than new hires. Here’s one estimate generated by Claude and me:
Even after discounting for standard vesting terms (4 years), the percentage who are EAs, and the percentage allocated to charity, that’s still a mind-boggling amount of money. I’d guess that this is more like “10 new OpenPhils in the next 2-6 years.”
I heard about the IPO rumors at the same time as everyone else (i.e. very recently), but for the last 6 months or so, the expectation was that Anthropic might have a ~yearly liquidity event, where Anthropic or some other buyer buys back employee stock up to some cap ($2M was thrown around as a figure).
As reported in other places, early Anthropic employees were offered a 3:1 match on donations of equity, IIRC up to 50% of their total stock grant. New employees are now offered a 1:1 match, but the 3:1 holds for the early ones (though not cofounders).
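As a back-of-envelope, here is the equity math above as a minimal Python sketch. Every input is an assumption from this comment (the reported ~$350B valuation, a guessed 6% employee pool, ~3,000 headcount), not an official Anthropic figure.

```python
# Back-of-envelope for the equity estimate above. All inputs are this
# comment's assumptions, not official Anthropic figures.
valuation = 350e9       # reported ~$350B valuation
employee_pool = 0.06    # guessed fraction of equity held by employees (maybe 0.12-0.20)
n_employees = 3_000     # approximate current headcount

total_equity = valuation * employee_pool       # $21B
avg_per_employee = total_equity / n_employees  # ~$7M

print(f"employee equity: ${total_equity / 1e9:.0f}B")           # $21B
print(f"average per employee: ${avg_per_employee / 1e6:.1f}M")  # $7.0M
```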
Can you explain this math for me? The figure you started with is $21 billion in Anthropic equity, so what’s your figure for Open Philanthropy/Coefficient Giving? Dustin Moskovitz’s net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that’s at least $6 billion. $21 billion is only 3.5x more than $6 billion, not 10x.
If you think that, of that $21 billion in Anthropic equity, 50% is owned by people who identify with or like effective altruism, that’s $10.5 billion. If they donate 50% of it to EA-related charities, that’s around $5 billion. So, even on these optimistic assumptions, that would only be around one Open Philanthropy (now Coefficient Giving), not ten.
What didn’t I understand? What did I miss?
(As a side note, the time horizon of 2-6 years is quite long...)
Some factors that could raise giving estimates:
The 3:1 match
If “6%” is more like “15%”
Future growth of Anthropic stock
Differing priorities and timelines (i.e. focus on TAI) among Ants
Also, the Anthropic situation seems like it’ll be different from Dustin’s in that the number of individual donors (“principals”) goes up a lot, which I’m guessing leads to more grants at smaller sizes, rather than OpenPhil’s (relatively) few, giant grants.
So, what is your actual math to get to 10x the size of Open Philanthropy?
To be clear, “10 new OpenPhils” is trying to convey, like, a gestalt or a vibe: how I expect the feeling of working within EA causes to change, rather than a rigorous point estimate.
Though, I’d be willing to bet at even odds on something like “yearly EA giving exceeds $10B by the end of 2031”, which is about 10x the largest year per https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update.
2031 is far too far away for me to take an interest in a bet about this, but I proposed one for the end of 2026.
Dustin Moskovitz’s net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that’s at least $6 billion.
I think this pledge is over their lifetime, not over the next 2-6 years. OP/CG seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin’s time frame.
But if it’s $21 billion total in Anthropic equity, that $21 billion is going to be almost all of the employees’ lifetime net worth — as far as we know and as far as they know. So, why would this $21 billion all get spent in the next 2-6 years?
If we assume, quite optimistically, that half of the equity belongs to people who want to give to EA-related organizations, and they want to give 50% of their net worth to those organizations over the next 2-6 years, that’s around $5 billion.
If Open Philanthropy/Coefficient Giving is spending $1 billion a year like you said, that’s around one OP/CG, not ten (see the sketch below).
If OP/CG is really spending $1 billion/year, then OP/CG must have a lot more donations coming in from people other than Dustin Moskovitz or Cari Tuna than I realized. Either that or they’re spending down their fortune much faster than I thought.
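To make the comparison explicit, here is a minimal sketch of the arithmetic in this reply, treating the deliberately optimistic 50%/50% splits as labeled assumptions:

```python
# "One OP/CG, not ten": optimistic Anthropic-employee giving vs. OP/CG's
# cited ~$1B/year, over Austin's 2-6 year window. The fractions are
# deliberately optimistic assumptions, not data.
employee_equity = 21e9  # total employee equity figure from the thread
ea_fraction = 0.5       # assumed share held by EA-sympathetic people
giving_fraction = 0.5   # assumed share of net worth donated in the window

ea_giving = employee_equity * ea_fraction * giving_fraction  # ~$5.25B
opcg_yearly = 1e9

for years in (2, 6):
    ratio = ea_giving / (opcg_yearly * years)
    print(f"over {years} years: {ratio:.2f}x OP/CG")  # 2.62x and 0.88x
```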
Oh, so if this is not IPO-contingent, what explains the timing on this? Why 2026 or 2027 and not 2025 or 2024?
I do know there are platforms like Forge Global and Hiive that allow for buying/selling shares in private startups on the secondary market. I just wonder why a lot of people would be selling their shares in 2026 or 2027, specifically, rather than holding onto them longer. I think many employees of these AI companies are true believers in the growth story and the valuation story for these companies, and might be reluctant to sell their equity at a time when they feel they’re still in the most rapid growth phase of the company.
Any particular reason to think many people out of these dozens or hundreds of nouveau riche will want to donate to meta-EA? I understand the argument for people like Daniela Amodei and Holden Karnofsky to give to meta-EA (although, as noted in another comment, Daniela Amodei says she doesn’t identify with effective altruism), but I don’t understand the argument for a lot of smaller donors donating to meta-EA.
Interesting footnote about the Future of Life Institute. Would that apply to a software engineer working for OpenAI or Anthropic, or just a donation directly from one of those companies?
My general point about established charities like the Future of Life Institute or any other example you care to think about is that most donors will probably prefer to donate directly to charities rather than donating through an EA fund or a regranter. And most will probably want to donate to things other than meta-EA.
These are good questions and points. I have answers and explanations such that the points you raise do not particularly change my mind, but I feel aversion towards explaining them on a public forum. Thanks for understanding.