Hey there, I’m Austin, currently running https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com!
Austin
Winners of the Manifund Essay Prize
Welcome to the EA Forum and thank you for posting this! I'm a fan of both Change.org and now GiveDirectly. I agree with most of your points (and every one of your hot takes, I think!)
I’d push back a bit against “4. Use the index funds of giving.” One nitpick is that I’m not sure the analogy holds that well—charitable funds like GiveWell and CG’s are more like mutual funds or hedge funds; you can’t actually passively index because there’s no simple metric of market cap to benchmark against. So implicitly, the choice of which fund to give to bakes in a bunch of worldview and effectiveness assumptions, and a lot of trust in the people running the charitable fund (unlike, say, VTI).
More broadly, I think that on the margin, there’s too much deference to charitable funds and too little “do your own research” in the EA space. (Though I understand that your original post on LinkedIn is angled for a wider audience, and there, I think GiveWell, or GiveDirectly!, is a great default rec.)
The problem with charities/DAFs accepting pre-IPO stock, though, is that they still need some way of liquidating that stock at the end of the day.
There are also more exotic options possible, for a large enough donation size. A year ago, before the Anthropic tenders were worked out, I had a proposal for lining up Anthropic employees & EA earn-to-give donors and having them do a donation swap.
Some people (earn-to-give folks? banks?) may be willing to lend you money against your private/pre-IPO stock as well?
I think preparing for AI money is generally smart given Anthropic & OpenAI Foundation, though I don’t expect Ineffable specifically to have liquidity for at least a couple years.
It’s possible that there are some clever schemes that could allow David or others to start donating sooner (eg some liquidity at a raise, or borrowing against the value of stock), but historically it’s not until IPO (and sometimes much later) that founders donate significant amounts.
soon! writing up some feedback on the winning essays and some reflections
I will consider this, thanks for the nudge!
Yes, Marcus Abramovitch and I put out this piece analyzing cost-effectiveness for AI safety YouTubers specifically.
Manifund doesn’t have other pieces in the pipeline, but I would love for more work of this kind to exist, and I know other initiatives like https://grantmaking.ai/ are interested in finding qualified folks to do this kind of analysis at scale.
Thanks for asking!
My strong default prior is that most for-profits are good for the world, following the standard arguments: gains from trade, Paul Graham on wealth, the finding that corporations only keep 2.2% of the value they create.
Moreover, I like when people who share my values start valuable companies, because they often spend that money on projects that are good. Mechanize doesn’t seem that different from Asana, or maybe Microsoft, in this regard. Tamay has taken the GWWC pledge; I find Matthew’s writing (eg on AI rights) very informative.
Object-level, it seems like Mechanize mostly sells good code RL environments to Anthropic. Across the community, opinions on accelerating Anthropic capabilities are also mixed, but on balance I lean pro.
I personally benefit a lot from the good coding capabilities that Claude Code provides. This stage of AI/LLM development seems broadly good to me.
(NB, my view wouldn’t change if they were mostly selling to OpenAI or GDM or something.)
On a more minor note, some people view the founders’ leaving Epoch as somehow a betrayal of Epoch or the funding it received; this seems quite fake to me. I strongly support individuals’ rights to branch out and start new orgs. In any case, it seems like Epoch has continued to do well.
I included this story as a short anecdote about Marcus’s ability to spot talent, make active investments, and convince founders to take the leap, all of which I expect to transfer into helping start great AI x Animal orgs. I understand that different people in EA/AI safety have different takes about whether Mechanize specifically is good or bad—I happen to think good or at least neutral.
(And I take responsibility for any factual errors with this specific anecdote. Talking to Marcus just now, it seems like his main nudge was to convince Ege/Matthew/Tamay that the nonprofit structure was wrong for what they wanted to accomplish.)
Manifund’s Falcon Fund
At Manifold, we talked a lot about which kinds of markets/verticals to focus on, and we were always aware that sports gambling is a big demographic. We mostly chose to stay out, though: partly for ideological reasons (it wasn’t interesting to us), partly because we weren’t positioned for it.
I don’t think we registered a specific concrete prediction to this effect, but eg in our 2022 seed round memo (~4 months after we were founded) we called out Betfair and DraftKings as the largest available markets.
yup!
Would recommend paragraphs, as I think they read better. (Do as I say, not as I do!)
See also https://dynomight.substack.com/p/formatting
Manifund Essay Prize: on EA funding, the SF scene, and forecasting
Rented Virtue, by Will Manidis & Nabeel S. Qureshi
Very much agreed, though I’m guilty of not having done this myself; hope to fix this soon!
Two other donation writeups I really liked:
@richard_ngo on his 2025 giving strategy: https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=rxSTSbZugfTZ3tCuc
@Joel Becker on his experience regranting in 2022: https://joel-becker.com/digital-garden/regrantor/
There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, there will be >$1B being directed per year, and potentially multiple billions.
I like this framing for the botecs it encourages!
Currently it seems like each grantmaker is (on average) responsible for ~$10m/y. One question I think about sometimes: how will the number of grantmakers scale as more money goes towards AI safety funding? If funding is eg 3x’ing year-over-year, it’s unclear whether we’re currently training up grantmakers at a matching rate.
Another question might be: what is a good ratio of grantmakers to direct workers? I’d ballpark there to be ~1000 fulltime AIS direct workers; does a 20:1 ratio seem high, low, or just right?
I’d be curious to look at comparisons from scaled funding ecosystems as a reference class; I’m primarily thinking of VCs & angels, but others, eg academic funding, may also be appropriate.
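For concreteness, here’s a minimal sketch of the botecs above in Python. All the specific numbers (50 grantmakers, $500m/y, 1000 direct workers) are rough assumptions I’m picking to match the ballparks in the comment, not measured figures:

```python
# Rough BOTEC on AI safety grantmaker capacity.
# All inputs are assumptions chosen to match the comment's ballparks.

grantmakers = 50          # within the "30 to 60 people" estimate
dollars_per_year = 500e6  # "hundreds of millions" directed annually
direct_workers = 1000     # ballpark of fulltime AIS direct workers

# Average dollars directed per grantmaker per year
per_grantmaker = dollars_per_year / grantmakers
print(f"${per_grantmaker / 1e6:.0f}m/y per grantmaker")  # $10m/y

# If funding 3x's while grantmaker count stays flat,
# each grantmaker's load triples:
print(f"${3 * per_grantmaker / 1e6:.0f}m/y per grantmaker after a 3x")  # $30m/y

# Ratio of direct workers to grantmakers
print(f"{direct_workers / grantmakers:.0f}:1 direct workers per grantmaker")  # 20:1
```

The interesting lever is the second print: if dollars 3x but grantmaker headcount doesn’t, the per-person load balloons, which is one way to frame the training-pipeline question.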
It’s a separate event run by CEA, which, in contrast to EAG, is much smaller and just for leaders in the field of xrisk. (I haven’t been, but my wife attended the 2026 edition.)
I’m hiring for a variety of roles, which are mostly operational/community-shaped:
Nah, the real prize is the karma & engagement we receive along the way!