I find so much EA analysis, in general, to be too clever by half. (Per Wiktionary: "Shrewd but flawed by overthinking or excessive complexity, with a resulting tendency to be unreliable or unsuccessful.") So many conversations like this could be helped along by a simpler, more commonsense analysis. Does EA need to have a big conversation right now about how to handle it if EA suddenly gets tons of money? Probably not.
Expecting the money to come in sounds like wishful thinking. Even if there are Anthropic billionaires with liquidity in 2026 or 2027 (which is not guaranteed to happen), even if these billionaires are influenced by EA and want to give money to some of the same charities or cause areas as people in EA care about, who says the money is going to flow through the EA community/movement? If I were an Anthropic billionaire, rather than trying to be Sam Bankman-Fried 2.0 and just spraying a firehose of money at the EA community generally, I would pick the charities I want to donate to and give to them directly.
Besides Sam Bankman-Fried, the other billionaires who have donated to EA-related charities and causes, like Dustin Moskovitz/Cari Tuna and Jaan Tallinn, have managed their own giving entirely. Sam Bankman-Fried's behaviour in general was impulsive and chaotic (his financial crimes seem less like rational calculation and more like poor impulse control or general disinhibition, as crime often is), and the way he gave money to EA seems like an extension of that. A more careful person probably wouldn't do it that way. They would probably start a private foundation, hire some people to help manage it, and run it quietly out of public view. Maybe they would take the unusual step of starting something like Open Philanthropy/Coefficient Giving and do their giving in a more public-facing way. But even so, this is still under their control and not the EA community's control.
If some Anthropic billionaire does just back a truck full of money up to the EA community, that's a good problem to have, and it's the sort of problem you can digest and adapt to as it starts happening. You don't need to invest a lot of your limited resources of time, energy, and attention in it 6 months to 3 years in advance, when it's not actually clear it will ever happen at all. (This isn't an asteroid; you don't need to fret about long-tail risk.)
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.
I'll bet you a $10 donation to the charity of your/my choice that by December 31, 2026, not all three of these things will be true:
Anthropic will have successfully completed an IPO at a valuation of at least $200 billion and its market cap will have remained above $200 billion.[1]
More than $100 million in new money[2] (so, at least $100 million more than in 2025 or 2024, and from new sources) will be donated to EA Funds or a new explicitly EA-affiliated fund similar to the FTX Future Fund[3] (managed at least in part by people with active, existing, at least slightly prominent roles in the EA community as of December 10, 2025) by Anthropic employees in 2026 other than Daniela Amodei, Holden Karnofsky, or Dario Amodei. (Given Karnofsky's historical role in the EA movement and EA-related grantmaking, I'm excluding him, his wife, and his brother-in-law from consideration as potentially corrupting influences.)
A survey of at least ten representative and impartial EA Forum users (with accounts created before December 10, 2025 and at least 50 karma) will find that more than 50% believe it's at least 10% likely that this very EA Forum post on which we're commenting (as well as any/all other posts on the same topic this month) reduced by at least 1% the amount of corruption, loss of virtue, or undue influence relating to that $100+ million in a way that could not have been done by waiting to have the conversation until after the Anthropic IPO was officially announced. Or a majority of 1-3 judges we agree on believe that is at least 10% likely.[4]
I think that at least one and possibly two or all three of these things won't be true by December 31, 2026. If at least one of them isn't true, I win the bet. If all three are true, you win the bet.
I think December 31, 2026 is a reasonable deadline because if this still hasn't happened by then, then my fundamental point that this conversation is premature will have been proven right.
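For concreteness, the resolution logic is simple enough to write down; here is a minimal sketch (the flag names are just my illustrative labels for the three conditions above):

```python
# Minimal sketch of how the bet resolves. The three flags stand for the
# three numbered conditions above; how each is judged true or false is
# spelled out in the conditions and footnotes.

def bet_winner(ipo_held_above_200b: bool,
               new_100m_to_ea_funds: bool,
               survey_credits_discussion: bool) -> str:
    """I win unless all three conditions are true."""
    if ipo_held_above_200b and new_100m_to_ea_funds and survey_credits_discussion:
        return "you"
    return "me"

# Example: the IPO happens and the money arrives, but the survey condition fails.
print(bet_winner(True, True, False))  # -> "me"
```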
I'm open to counter-offers.
I'm also open to making this same bet with anyone else, including if more than one person wants to bet me. (Anyone else can feel free to counter-offer me as well.)
[1] If the market cap goes below $200 billion for more than a few days, it probably means the AI bubble popped, and any future donations from Anthropic employees have become highly uncertain.
[2] I could possibly be talked down to $50 million.
[3] There might be a better way to operationalize what meta-EA or EA regranting means. I'm open to suggestions.
[4] This is the hardest condition/resolution criterion to operationalize in a way you can bet on. I'm trying to be lenient while also avoiding a formulation where, no matter what empirically happens, the survey respondents/judges will say that starting the discussion now is above the expected-value threshold. I'd be willing to abandon this condition/criterion if it's too hard to agree on, but then the bet would be missing one of the cruxes of the disagreement (possibly the most important one).
Do you think lots of money will just be given to EA-related charities such as the Against Malaria Foundation, the Future of Life Institute, and so on (that sounds plausible to me), or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing? It's the second part that I'm doubting.
I suppose a lot of it comes down to what specifically Daniela Amodei and Holden Karnofsky decide to do if their family has their big liquidity event, and that's hard to predict. Given Karnofsky's career history, he doesn't seem like the kind of guy to want to just outsource his family's philanthropy to EA funds or something like that.
lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing?
You're probably doubting this because you don't think it's a good way to spend money. But that doesn't mean that the Anthropic employees agree with you.
The not-super-serious answer would be: US universities are well-funded in part because rich alumni like to fund them. There might be similar reasons why Anthropic employees might want to fund EA infrastructure/community building.
If there is an influx of money into "that sort of thing" in 2026/2027, I'd expect it to look different to the 2018-2022 spending in these areas (e.g. less focused on general longtermism, more focused on AI, maybe more decentralised, etc.).
Different in what ways? Edit: You kind of answered this in your edit, but what I'm getting at is: SBF's giving was indiscriminate and disorganized. Do you think the Anthropic nouveau riche will give money as freely to random people in EA?
I'm also thinking that Daniela Amodei said this about effective altruism earlier this year:
I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term.
Maybe it was just a clumsy, off-the-cuff comment. But it makes me go: hmm.
She's gonna give her money to meta-EA?
Given Karnofsky's career history, he doesn't seem like the kind of guy to want to just outsource his family's philanthropy to EA funds or something like that.
He was leading the Open Philanthropy arm that was primarily responsible for funding many of the things you list here:
or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing
That's a really good point!
I guess my next thought is: are we worried about Holden Karnofsky corrupting effective altruism? Because if so, I have bad news...
My thought process is vaguely, hazily something like this:
There's a ~50% chance Anthropic will IPO within the next 2 years.
Conditional on an Anthropic IPO, there's a ~50% chance any Anthropic billionaires or centimillionaires will give tons of money to meta-EA or EA funds.
Conditional on Anthropic billionaires/centimillionaires backing up a truck full of money to meta-EA and EA funds, there's a ~50% chance that worrying about the potential corrupting effects of the money well in advance is a good allocation of time/energy/attention.
So, the overall chance this conversation is important to have now is ~10%.
The ~50% probabilities and the resulting ~10% probability are totally arbitrary. I don't mean them literally. This is for illustrative purposes only.
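Spelled out as a toy calculation (again, these numbers are placeholders for illustration, not real estimates):

```python
# Toy version of the chain above. Each factor is an illustrative ~50%,
# not a real estimate; the point is just that stacked conditions
# multiply down quickly.

p_ipo = 0.5          # Anthropic IPOs within the next 2 years
p_giving = 0.5       # big money flows to meta-EA/EA funds, given an IPO
p_prep_useful = 0.5  # advance worrying is a good use of attention, given that

p_matters_now = p_ipo * p_giving * p_prep_useful
print(f"{p_matters_now:.0%}")  # ~12%, i.e. roughly the ~10% above
```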
But the overall point is that it's like the Swiss cheese model of risk, where three things have to go "wrong" for a problem to occur. But in this case, the thing that would go "wrong" is getting a lot of money, which has happened before with SBF's chaotic giving, and has been happening continuously in a more careful way with Open Philanthropy (now Coefficient Giving) since the mid-2010s.
If SBF had made his billions from selling vegan ice cream and hadn't done any scams or crimes, and if he had been more careful and organized in the way he gave the money (e.g. been a bit more like Dustin Moskovitz/Cari Tuna or Jaan Tallinn), I don't think people would be as worried about the prospect of getting a lot of money again.
Even if the situation were like SBF 2.0, it doesn't seem like the downsides of that would be that bad or that hard to deal with (compared to how things in EA already are right now), so the logic of carefully preparing for a big-impact risk on the ~10% (or whatever it is) chance it happens doesn't apply. It's a small-impact risk with a low probability.
And, overall, I just think the conversations like this I see in EA are overly anxious, overly complicate things, and intellectualize too much. I don't think they make people less corruptible.
Hey, thanks for this comment. To be clearer about my precise model, I don't expect there to be new Anthropic billionaires or centimillionaires. Instead, I'm expecting dozens (or perhaps low hundreds) of software engineers who can afford to donate high-six-figure to low-seven-figure amounts per year.
Per levels.fyi, here is what Anthropic comp might look like:
And employees who joined the firm early often had agreements of 3:1 donation matching for equity (that is, Anthropic would donate $3 for every $1 that the employee donates). My understanding is that Anthropic had perks like this specifically to try to recruit more altruistically minded people, like EAs.
Further, other regrantors in the space agree that a lot more donations are coming.
(Also note that Austin is expecting 1-2 more OOMs of funding than me. He is also much more plugged into the actual scene.)
Here's what the historical data on EA grantmaking looks like:
I anticipate that the new funds pouring into specifically the EA ecosystem will not be at the scale of another OpenPhil (disbursing $500M+ per year), but there's a small chance it might match the scale of GiveWell (disbursing ~$200M per year, though more focused on meta-EA, x-risk, and longtermist goals than GW), and I would be very surprised if it fails to match SFF's scale (disbursing ~$30M a year) by the end of 2026.
How much new funding is Austin Chen expecting? Is it conditional on an Anthropic IPO? Are your expectations conditional on an Anthropic IPO?
I suppose the whole crux of the matter is: even if there is an additional ~$300-400 million per year, what percentage will go into meta-EA, EA funds, general open grantmaking, or the broader EA community, as opposed to GiveWell, GiveWell's recommended charities, or existing charities like the Future of Life Institute? If it's a low percentage, the conversation seems moot.
I don't know how much new funding Austin Chen is expecting.
My expectations are not contingent on Anthropic IPOing, and presumably neither are Austin's. Employees are paid partially in equity, so some amount of financial engineering will be done to allow them to cash out, whether or not an IPO is happening.
I expect that, as these new donors are people working in the AI industry, a significant percentage is going to go into the broader EA community and not directly to GW. A double-digit percentage for sure, but with a pretty wide CI.
And funny you should mention FLI: they specifically say they do not accept funding from "Big Tech" and AI companies, so I'm not sure where that leaves them.
They are also a fairly small non-profit and I think they would struggle to productively use significantly more funding in the short term. Scaling takes time and effort.
Appreciate the shoutout! Some thoughts:
Anthropic's been recently valued at $350B; if we estimate that e.g. 6% of that is in the form of equity allocated to employees, that's $21B spread across the ~3,000 employees they currently have, or an average of $7M/employee.
I think 6% is somewhat conservative and wouldn't be surprised if it were more like 12-20%.
Early employees have much (OOMs) more equity than new hires. Here's one estimate generated by Claude and me:
Even after discounting for standard vesting terms (4 years), % of EAs, and % allocated to charity, that's still mind-boggling amounts of money. I'd guess that this is more like "10 new OpenPhils in the next 2-6 years" (rough arithmetic sketched in code after this list).
I heard about the IPO rumors at the same time as everyone else (i.e. very recently), but for the last 6 months or so, the expectation was that Anthropic might have a ~yearly liquidity event, where Anthropic or some other buyer buys back employee stock up to some cap ($2M was thrown around as a figure).
As reported in other places, early Anthropic employees were offered a 3:1 match of donations of equity, iirc up to 50% of their total stock grant? New employees now are offered a 1:1 match, but the 3:1 holds for the early ones (though not cofounders).
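Putting the equity estimate and the match together, here is a rough sketch in code (the valuation, the 6% employee-equity share, and the headcount are my assumptions from above, not company data):

```python
# Back-of-the-envelope version of the bullets above. All inputs are rough
# assumptions, not company figures.

valuation = 350e9             # recent Anthropic valuation, ~$350B
employee_equity_share = 0.06  # assumed share allocated to employees (maybe 12-20%)
headcount = 3000              # approximate current employee count

employee_equity = valuation * employee_equity_share  # ~$21B
avg_per_employee = employee_equity / headcount       # ~$7M

# The 3:1 match means each $1 an early employee donates becomes $4 granted.
donation = 1_000_000
granted_with_match = donation * (1 + 3)

print(f"pool ~${employee_equity / 1e9:.0f}B, "
      f"average ~${avg_per_employee / 1e6:.0f}M/employee, "
      f"$1M donated -> ${granted_with_match / 1e6:.0f}M granted")
```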
Even after discounting for standard vesting terms (4 years), % of EAs, and % allocated to charity, that's still mind-boggling amounts of money. I'd guess that this is more like "10 new OpenPhils in the next 2-6 years"
Can you explain this math for me? The figure you started with is $21 billion in Anthropic equity, so what's your figure for Open Philanthropy/Coefficient Giving? Dustin Moskovitz's net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that's at least $6 billion. $21 billion is only 3.5x more than $6 billion, not 10x.
If, of that $21 billion in Anthropic equity, 50% is owned by people who identify with or like effective altruism, that's $10.5 billion. If they donate 50% of it to EA-related charities, that's around $5 billion. So, even on these optimistic assumptions, that would only be around one Open Philanthropy (now Coefficient Giving), not ten.
What didn't I understand? What did I miss?
(As a side note, the time horizon of 2-6 years is quite long...)
Some factors that could raise giving estimates:
The 3:1 match
If "6%" is more like "15%"
Future growth of Anthropic stock
Differing priorities and timelines (i.e. focus on TAI) among Ants
Also, the Anthropic situation seems like it'll be different from Dustin's in that the number of individual donors ("principals") goes up a lot, which I'm guessing leads to more grants at smaller sizes, rather than OpenPhil's (relatively) few, giant grants.
So, what is your actual math to get to 10x the size of Open Philanthropy?
To be clear, "10 new OpenPhils" is trying to convey a gestalt or a vibe: how I expect the feeling of working within EA causes to change, rather than a rigorous point estimate.
Though, I'd be willing to bet at even odds something like "yearly EA giving exceeds $10B by end of 2031", which is about 10x the largest year per https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update.
2031 is far too far away for me to take an interest in a bet about this, but I proposed one for the end of 2026.
Dustin Moskovitz's net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that's at least $6 billion.
I think this pledge is over their lifetime, not over the next 2-6 years. OP/CG seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin's time frame.
But if it's $21 billion total in Anthropic equity, that $21 billion is going to be almost all of the employees' lifetime net worth, as far as we know and as far as they know. So why would this $21 billion all get spent in the next 2-6 years?
If we assume, quite optimistically, that half of the equity belongs to people who want to give to EA-related organizations, and that they want to give 50% of their net worth to those organizations over the next 2-6 years, that's around $5 billion over the next 2-6 years.
If Open Philanthropy/Coefficient Giving is spending $1 billion a year like you said, that's around one OP/CG, not ten. (The arithmetic is sketched in code below.)
If OP/CG is really spending $1 billion/year, then OP/CG must have a lot more donations coming in from people other than Dustin Moskovitz or Cari Tuna than I realized. Either that or they're spending down their fortune much faster than I thought.
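Here is that arithmetic as a short sketch, using your ~$1 billion/year OP/CG figure and my deliberately optimistic 50% assumptions:

```python
# How many OP/CGs of giving would this be? The 50% figures are deliberately
# optimistic assumptions; the ~$1B/yr OP/CG spend is the figure cited above.

employee_equity_pool = 21e9  # total Anthropic employee equity, per Austin's estimate
share_ea_aligned = 0.5       # optimistic: half belongs to EA-sympathetic people
share_donated = 0.5          # optimistic: they give away half over the period

ea_giving = employee_equity_pool * share_ea_aligned * share_donated  # ~$5.25B

opcg_spend_per_year = 1e9
for years in (2, 6):
    equivalents = ea_giving / (opcg_spend_per_year * years)
    print(f"over {years} years: {equivalents:.1f}x OP/CG's spending")
# -> ~2.6x over 2 years, ~0.9x over 6 years: around one OP/CG, not ten
```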
Oh, so if this is not IPO-contingent, what explains the timing on this? Why 2026 or 2027 and not 2025 or 2024?
I do know there are platforms like Forge Global and Hiive that allow for buying/selling shares in private startups on the secondary market. I just wonder why a lot of people would be selling their shares in 2026 or 2027, specifically, rather than holding onto them longer. I think many employees of these AI companies are true believers in the growth story and the valuation story for these companies, and might be reluctant to sell their equity at a time when they feel they're still in the most rapid growth phase of the company.
Any particular reason to think many people out of these dozens or hundreds of nouveau riche will want to donate to meta-EA? I understand the argument for people like Daniela Amodei and Holden Karnofsky to give to meta-EA (although, as noted in another comment, Daniela Amodei says she doesn't identify with effective altruism), but I don't understand the argument for a lot of smaller donors donating to meta-EA.
Interesting footnote about the Future of Life Institute. Would that apply to a software engineer working for OpenAI or Anthropic, or just a donation directly from one of those companies?
My general point about established charities like the Future of Life Institute, or any other example you care to think about, is that most donors will probably prefer to donate directly to charities rather than donating through an EA fund or a regranter. And most will probably want to donate to things other than meta-EA.
These are good questions and points. I have answers and explanations such that the points you raise do not particularly change my mind, but I feel averse to explaining them on a public forum. Thanks for understanding.