Researching donation opportunities. Previously: ailabwatch.org.
Zach Stein-Perlman
Random take: people underrate optionality / information value. Even within EA, few opportunities are within 5x of the best opportunities (even on the margin), due to inefficiencies in the process by which people get informed about donation opportunities. Waiting to donate is great if it increases your chances of donating very well. Almost all of my friends regret their past donations; they wish they’d saved money until they were better-informed.
Random take: there are still some great c3 opportunities, but hopefully once the Anthropic people eventually get liquidity they’ll fill all of them.
Some public c3 donation opportunities I like are The Midas Project (small funding gap + no industry money), Forethought, and LTFF/ARM.
Random take: you should really invest your money for a high rate of return.
I’m not sure what we should be doing now! But I expect that people can make progress if they backchain from the von Neumann probes, whereas my impression is that most people entering the “digital sentience” space never think about the von Neumann probes.
Oh, clarification: it’s very possible that there aren’t great grant opportunities by my lights. It’s not like I’m aware of great opportunities that the other Zach isn’t funding. I should have focused more on expected grants than Zach’s process.
Thanks. I’m somewhat glad to hear this.
One crux is that I’m worried that broad field-building mostly recruits people to work on stuff like “are AIs conscious” and “how can we improve short-term AI welfare” rather than “how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with.” So the field-building feels approximately zero-value to me — I doubt you’ll be able to steer people toward the important stuff in the future.
A smaller crux is that I’m worried about lab-facing work similarly being poorly aimed.
I endorse Longview’s Frontier AI Fund; I think it’ll give to high-marginal-EV AI safety c3s.
I do not endorse Longview’s Digital Sentience Fund. (This view is weakly held. I haven’t really engaged.) I expect it’ll fund misc empirical and philosophical “digital sentience” work plus unfocused field-building — not backchaining from averting AI takeover or making the long-term future go well conditional on no AI takeover. I feel only barely positive about that. (I feel excited about theoretical work like this.)
$500M+/year in GCR spending
Wait, how much is it? https://www.openphilanthropy.org/grants/page/4/?q&focus-area%5B0%5D=global-catastrophic-risks&yr%5B0%5D=2025&sort=high-to-low&view-list=true lists $240M in 2025 so far.
I have a decent understanding of some of the space. I feel good about marginal c4 money for AIPN and SAIP. (I believe AIPN now has funding for most of 2026, but I still feel good about marginal funding.)
There are opportunities to donate to politicians and PACs which seem 5x as impactful as the best c4s. These are (1) more complicated and (2) public. If you’re interested in donating ≥$20K to these, DM me. This is only for US permanent residents.
I’m confident the timing was a coincidence. I agree that (novel, thoughtful, careful) posting can make things happen.
I mostly agree with the core claim. Here’s how I’d put related points:
Impact is related to productivity, not doing-your-best.
Praiseworthiness is related to doing-your-best, not productivity.
But doing-your-best involves maximizing productivity.
Increasing hours-worked doesn’t necessarily increase long-run productivity. (But it’s somewhat suspiciously convenient to claim that it doesn’t, and for many people it would.)
I haven’t read all of the relevant stuff in a long time, but my impression is that Bio/Chem High is about uplifting novices and Critical is about uplifting experts. See PF below. Also note OpenAI said Deep Research was safe; it said ChatGPT Agent and GPT-5 required safeguards.
I haven’t really thought about it and I’m not going to. If I wanted to be more precise, I’d assume that a $20 subscription is equivalent (to a company) to finding a $20 bill on the ground, assume that an ε% increase in spending on safety cancels out an ε% increase in spending on capabilities (or think about it and pick a different ratio), and look at money currently spent on safety vs capabilities. I don’t think P(doom) or company-evilness is a big crux.
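Spelled out as a back-of-the-envelope formula (a sketch of the method above; S, C, and the example ratio are placeholders, not figures from any source):

\[
\varepsilon = \frac{\text{annual subscription}}{C}, \qquad \text{offset} \;\approx\; \varepsilon \cdot S \;=\; \text{annual subscription} \times \frac{S}{C},
\]

where C is current annual spending on capabilities and S is current annual spending on safety. For illustration only: with a \$240/year subscription and a hypothetical ratio of S/C = 1/20, the offset would be about \$12/year.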
fwiw I think you shouldn’t worry about paying $20/month to an evil company to improve your productivity, and if you want to offset it I think a $10/year donation to LTFF would more than suffice.
The thresholds are pretty meaningless without at least a high-level standard, no?
One problem is that donors would rather support their favorite research than a mixture that includes non-favorite research.
I’m optimistic about the very best value-increasing research/interventions. But in terms of what would actually be done at the margin, most work that people would do for “value-increasing” reasons would be confused/doomed, I expect (and this is less true for AI safety).
I think for many people, positive comments would be much less meaningful if they were rewarded/quantified, because you would doubt that they’re genuine. (Especially if you feel excessively like an imposter and easily seize on reasons to dismiss praise.)
I disagree with your recommendations despite agreeing that positive comments are undersupplied.
Given 3, a key question is what can we do to increase P(optimonium | ¬ AI doom)?
For example:
Averting AI-enabled human-power-grabs might increase P(optimonium | ¬ AI doom)
Averting premature lock-in and ensuring the von Neumann probes are launched deliberately would increase P(optimonium | ¬ AI doom), but what can we do about that?
Some people seem to think that having norms of being nice to LLMs is valuable for increasing P(optimonium | ¬ AI doom), but I’m skeptical and I haven’t seen this written up.
(More precisely we should talk about expected fraction of resources that are optimonium rather than probability of optimonium but probability might be a fine approximation.)
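In symbols (just restating the parenthetical above), the quantity to care about is something like

\[
\mathbb{E}\!\left[\, f_{\text{optimonium}} \mid \neg\,\text{AI doom} \,\right] \;\approx\; P(\text{optimonium} \mid \neg\,\text{AI doom}),
\]

where \(f_{\text{optimonium}}\) is the fraction of our successors’ resources that end up as optimonium; the approximation is reasonable to the extent that outcomes are roughly all-or-nothing.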
One key question for the debate is: what can we do / what are the best ways to “increas[e] the value of futures where we survive”?
My guess is it’s better to spend most effort on identifying possible best ways to “increas[e] the value of futures where we survive” and arguing about how valuable they are, rather than arguing about “reducing the chance of our extinction [vs] increasing the value of futures where we survive” in the abstract.
I want to make salient these propositions, which I consider very likely:
1. In expectation, almost all of the resources our successors will use/affect come via von Neumann probes (or maybe acausal trade or affecting the simulators).
2. If 1, the key question for evaluating a possible future from scope-sensitive perspectives is: will the von Neumann probes be launched, and what will they tile the universe with? (Modulo acausal trade and simulation stuff.)
3. [controversial] The best possible thing to tile the universe with (maybe call it “optimonium”) is wildly better than what you get if you’re not really optimizing for goodness,[1] so given 2, the key question is whether the von Neumann probes will tile the universe with ~the best possible thing (or ~the worst possible thing) or something else.
4. Considerations about just our solar system or value realized this century miss the point, by my lights. (Even if you reject 3.)
[1]
Call computronium optimized to produce maximum pleasure per unit of energy “hedonium,” and that optimized to produce maximum pain per unit of energy “dolorium,” as in “hedonistic” and “dolorous.” Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.
So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the “hedons per joule” or “hedons per computation” of hedonium (call this H), minus the expected production of dolorium, multiplied by “dolors per joule” or “dolors per computation” (call this D).
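In symbols, the quoted approximation is

\[
\text{net pleasure} \;\approx\; \mathbb{E}[\text{hedonium produced}] \cdot H \;-\; \mathbb{E}[\text{dolorium produced}] \cdot D.
\]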
Quick take on longtermist donations for Giving Tuesday.
My favorite donation opportunity is Alex Bores’s congressional campaign. I also like Scott Wiener’s congressional campaign.
If you have to donate to a normal longtermist 501c3, I think Forethought, METR, and The Midas Project—and LTFF/ARM and Longview’s Frontier AI Fund—are good and can use more money (and can’t take Good Ventures money). But I focus on evaluating stuff other than normal longtermist c3s, because other stuff seems better and has been investigated much less; I don’t feel very strongly about my normal longtermist c3 recommendations.
Some friends and I have nonpublic recommendations less good than Bores but ~4x as good as the normal longtermist c3s above, according to me.