I’ll bet you a $10 donation to the charity of your choice (if you win) or my choice (if I win) that by December 31, 2026, not all three of these things will be true:
Anthropic will have successfully completed an IPO at a valuation of at least $200 billion and its market cap will have remained above $200 billion.[1]
More than $100 million in new money[2] (so, at least $100 million more than in 2025 or 2024, and from new sources) will be donated to EA Funds or a new explicitly EA-affiliated fund similar to the FTX Future Fund[3] (managed at least in part by people with active, existing, at least slightly prominent roles in the EA community as of December 10, 2025) by Anthropic employees in 2026 other than Daniela Amodei, Holden Karnofsky, or Dario Amodei. (Given Karnofsky’s historical role in the EA movement and EA-related grantmaking, I’m excluding him, his wife, and his brother-in-law from consideration as potentially corrupting influences.)
A survey of at least ten representative and impartial EA Forum users (with accounts created before December 10, 2025 and at least 50 karma) will find that more than 50% believe it’s at least 10% likely that this very EA Forum post on which we’re commenting (as well as any/all other posts on the same topic this month) reduced by at least 1% the amount of corruption, loss of virtue, or undue influence relating to that $100+ million, in a way that could not have been achieved by waiting to have the conversation until after the Anthropic IPO was officially announced. Alternatively, a majority of 1-3 judges we agree on will believe it is at least 10% likely.[4]
I think that at least one, and possibly two or all three, of these things won’t be true by December 31, 2026. If at least one of them isn’t true, I win the bet. If all three are true, you win the bet.
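For concreteness, here’s a minimal sketch in Python of how I understand the resolution logic. The names are illustrative and nothing in it overrides the written terms above:

```python
# A minimal sketch (not part of the bet's official terms) of how the three
# conditions combine, assuming each is recorded as a boolean on
# December 31, 2026. All function and variable names are illustrative.

def survey_condition_met(survey_responses: list[bool], judge_votes: list[bool]) -> bool:
    """True if more than 50% of 10+ surveyed Forum users endorse the
    criterion in condition 3, or if a majority of the 1-3 agreed-on
    judges do."""
    survey_pass = (
        len(survey_responses) >= 10
        and sum(survey_responses) / len(survey_responses) > 0.5
    )
    judge_pass = bool(judge_votes) and sum(judge_votes) > len(judge_votes) / 2
    return survey_pass or judge_pass

def bet_winner(ipo_held_above_200b: bool, new_donations_over_100m: bool,
               survey_condition: bool) -> str:
    # All three conditions must hold for my counterparty to win;
    # any single failure means I win.
    if ipo_held_above_200b and new_donations_over_100m and survey_condition:
        return "you win"
    return "I win"

# Example: the IPO succeeded and the donations materialized, but only
# 4 of 10 survey respondents endorsed the 10%-likelihood claim and no
# judges were appointed, so the third condition fails.
print(bet_winner(True, True, survey_condition_met([True] * 4 + [False] * 6, [])))
# -> "I win"
```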
I think December 31, 2026 is a reasonable deadline because if this still hasn’t happened by then, my fundamental point that this conversation is premature will have been proven right.
I’m open to counter-offers.
I’m also open to making this same bet with anyone else, including if more than one person wants to bet me. (Anyone else can feel free to counter-offer me as well.)
[1] If the market cap goes below $200 billion for more than a few days, it probably means the AI bubble popped, and any future donations from Anthropic employees have become highly uncertain.
[2] I could possibly be talked down to $50 million.
[3] There might be a better way to operationalize what meta-EA or EA regranting means. I’m open to suggestions.
[4] This is the hardest condition/resolution criterion to operationalize in a way you can bet on. I’m trying to be lenient while at the same time avoiding a formulation under which, no matter what empirically happens, the survey respondents/judges will say that starting the discussion now was above the expected-value threshold. I’d be willing to abandon this condition/criterion if it’s too hard to agree on, but then the bet would be missing one of the cruxes of the disagreement (possibly the most important one).
Do you think lots of money will just be given to EA-related charities such as the Against Malaria Foundation, the Future of Life Institute, and so on (that sounds plausible to me) or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing? It’s the second part that I’m doubting.
I suppose a lot of it comes down to what specifically Daniela Amodei and Holden Karnofsky decide to do if their family has their big liquidity event, and that’s hard to predict. Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.
lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing?
You’re probably doubting this because you don’t think it’s a good way to spend money. But that doesn’t mean that the Anthropic employees agree with you.
The not-super-serious answer would be: US universities are well-funded in part because rich alumni like to fund them. There might be similar reasons why Anthropic employees might want to fund EA infrastructure/community building.
If there is an influx of money into ‘that sort of thing’ in 2026/2027, I’d expect it to look different from the 2018-2022 spending in these areas (e.g. less focused on general longtermism, more focused on AI, maybe more decentralised, etc.).
Different in what ways? Edit: You kind of answered this in your edit, but what I’m getting at is: SBF’s giving was indiscriminate and disorganized. Do you think the Anthropic nouveau riche will give money as freely to random people in EA?
I’m also thinking about the fact that Daniela Amodei said this about effective altruism earlier this year:
I’m not the expert on effective altruism. I don’t identify with that terminology. My impression is that it’s a bit of an outdated term.
Maybe it was just a clumsy, off-the-cuff comment. But it makes me go: hmm.
She’s gonna give her money to meta-EA?
Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.
He was leading the Open Philanthropy arm that was primarily responsible for funding many of the things you list here:
or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing
That’s a really good point!
I guess my next thought is: are we worried about Holden Karnofsky corrupting effective altruism? Because if so, I have bad news…
I guess lots of money will be given. Seems reasonable to think about the impacts of that. Happy to bet.