We’re building grantmaking.ai—public funding infrastructure for AI safety
TL;DR
The AI safety ecosystem is about to receive billions in new funding but lacks the grantmaker capacity, speed, and infrastructure to deploy it well — especially for smaller, high-impact opportunities.
We’re building grantmaking.ai, a comprehensive public database of x-risk AI safety funding opportunities with a trust and signal layer on top — think Crunchbase meets GiveWell for AI safety.
To test and seed the platform, we’re launching a $1M grant round focused on rapid distribution of $5K–$50K grants. Sign up for updates here.
We’re looking to connect with funders who want better tools to find, evaluate, and fund AI safety opportunities. Reach out at hi@grantmaking.ai or book a call here.
The Problem
Billions of dollars are going to flow into AI safety. From AI lab employees with significant equity events, from new donors bought into existential risk, from foundations scaling up. Sophie Kim’s “The Anthropic IPO Is Coming. We Aren’t Ready for It” covers this well, and Julian Hazell’s “What it’s like to be an AI safety grantmaker” tells us there simply aren’t enough grantmakers.
Every new funder who wants to deploy capital responsibly hits the same bottlenecks:
Discovery is hard. Most opportunities are behind insider networks, dispersed across private emails and Slack channels.
Evaluation is repetitive. Every funder reconstructs the same picture from scratch because there’s no shared, structured source of truth.
Information is stale. You can’t tell if an org still needs money without emailing them directly.
Trusted signal is invisible. What experienced grantmakers actually think lives in private conversations that never reach new funders.
On the other side, grantees maintain parallel email threads explaining the same information to every potential funder. They apply to five different funding sources with slightly different forms asking the same questions. And nobody can tell from the outside whether a project is already fully funded or desperately needs $30K to survive another quarter.
The established funders (Coefficient Giving, Longview, SFF, etc.) will likely absorb much of the incoming capital, and they are especially well-suited to larger grants, repeat grantees, and opportunities already in their networks. That’s great. But there is a massive long tail of smaller projects, independent researchers, and early-stage work that falls below their threshold or outside their pipeline. That’s the gap we’re most excited to fill.
What We’re Building
grantmaking.ai is a comprehensive, public database of AI safety funding opportunities — organizations, independent researchers, projects, funds, combined with a trust and signal layer to help funders find what’s worth supporting.
A useful framing: Crunchbase for AI safety. Every entity in the ecosystem gets a living profile on the platform. We use public sources to populate initial profiles and then the people behind those profiles can claim and update them with private information like current runway, active fundraising goals, and application status with other funders.
On top of the data sits a signal layer: experienced reviewers publish their top picks with explanations. Community members can comment, endorse, and discuss. Over time, the platform surfaces which projects have broad support, which are controversial, and which are flying under the radar. The goal is that a new funder can land on the platform and quickly go from “I have $500K to give away” to “here are the 15 projects that multiple people I trust think are excellent and still need funding.”
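As a concrete illustration of how that signal layer might surface picks, here is a minimal sketch in Python. All field names, project names, and the scoring rule (count of distinct endorsers, filtered by self-reported funding need) are assumptions for illustration, not the platform’s actual design.

```python
from collections import defaultdict

# Hypothetical reviewer picks: (reviewer, project). Names are illustrative.
endorsements = [
    ("alice", "interp-lab"),
    ("bob", "interp-lab"),
    ("carol", "interp-lab"),
    ("alice", "evals-toolkit"),
    ("bob", "evals-toolkit"),
    ("dana", "field-building"),
]

# Self-reported status, as might come from claimed profiles.
still_needs_funding = {
    "interp-lab": True,
    "evals-toolkit": False,   # already fully funded
    "field-building": True,
}

def top_picks(endorsements, still_needs_funding, min_endorsers=2):
    """Rank projects by number of distinct endorsers, keeping only
    those that report they still need funding."""
    endorsers = defaultdict(set)
    for reviewer, project in endorsements:
        endorsers[project].add(reviewer)
    return sorted(
        ((p, len(rs)) for p, rs in endorsers.items()
         if still_needs_funding.get(p) and len(rs) >= min_endorsers),
        key=lambda x: -x[1],
    )

print(top_picks(endorsements, still_needs_funding))
# [('interp-lab', 3)]
```

The real platform would presumably weight reviewers by track record and handle disagreement more carefully; the point is just that once endorsements and funding status live in one structured place, “broadly supported and still underfunded” becomes a simple query.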
Who This Is For
Funders (our primary focus): individuals with $100K–$10M+ to deploy into AI safety who don’t have insider access to deal flow, or who simply want to make more informed decisions faster. We’re especially looking for funders who are excited about the idea of open, public data and evaluation. If you want to understand the landscape, form your own views, and contribute to a shared resource that helps the whole ecosystem, please reach out; we’d love to chat.
Grantees: organizations and individuals seeking funding. Maintain your information once, in one place. Share your profile link with potential funders instead of answering the same due diligence questions repeatedly. Think of it as your funding-focused professional profile.
Reviewers and domain experts: if you have opinions about what should be funded in AI safety, this is a place to make those opinions visible, build a public track record, and actually influence where money flows.
The Initial $1M Grant Round
To test and seed the platform, we’re partnering with Manifund to launch a $1M grant round focused on existential AI safety. All other details (application process, reviewer panel, timeline) will be shared soon. Join our newsletter to get notified directly!
Here’s the shape of it:
Speed is the priority. Big funders take months to fund projects. We want to get from application to funding decision in weeks. We believe that with the volume of capital entering the ecosystem, funding quickly, while being careful about potential harms, is important.
Small grants that big funders can’t serve. We’re funding $5K–$50K grants, the range that is often hard for large institutional funders to evaluate cost-effectively but can be transformative for an independent researcher, an early-stage project, or someone who needs conference travel, compute credits, or a few months of runway.
Open and transparent. All applications, reviews, and funding decisions will be public. This creates accountability, and lets the community learn from what gets funded and why.
Ongoing, not one-shot. If the first round goes well, we aim to distribute approximately $1M/month on a rolling basis.
Customizable for funders. If you’re a funder with your own preferences we can help you set up your own grant round through our platform, connect you with reviewers, and handle the logistics. We’re offering hands-on onboarding and customization for the right partnerships.
Just to be clear, the grant round is not the end product; it is our first test of whether a public funding database plus reviewer signal can help money move faster and better. We’re open to your feedback, and will iterate and tweak future rounds / distributions.
Long-Term Vision
The grant round is how we get started. The bigger picture is building infrastructure that makes the entire AI safety funding ecosystem work better as it scales by orders of magnitude.
A public coordination layer. Right now, every funder independently reconstructs the same map of the ecosystem. Every grantee independently pitches the same story to every funder. A shared, public, structured data layer eliminates enormous amounts of duplicated work and lets funders, grantees, and evaluators build on each other’s contributions rather than starting from scratch.
A proving ground for new grantmakers. The ecosystem’s biggest bottleneck isn’t money, it’s grantmaker capacity. Our platform lets community members build a visible track record of grant evaluation. Someone who consistently writes thoughtful, well-reasoned reviews can demonstrate their judgment publicly, potentially graduating into funded regranting roles. We’re exploring the idea of a formal grantmaker talent development cohort, where participants learn evaluation skills while building their public track record on the platform.
Infrastructure for AI-assisted grantmaking. This is something that excites us and that we think the ecosystem is underinvesting in. Within the next year or two, AI agents will be capable of doing substantial work around gathering, cleaning, surfacing, and even helping to evaluate grant-relevant information.
Imagine a world where every AI safety org and independent researcher has an AI assistant that continuously updates their public profile with progress, needs, and milestones, with much less human time and effort. Imagine our platform using AI to triage incoming applications, surface relevant context from across the ecosystem, flag when an org’s runway is getting low, or identify promising projects that match a specific funder’s priorities. The bottleneck for that future isn’t the AI capability, it’s having the structured data, community, and infrastructure in one place for AI to work with.
We’re building the data layer and coordination infrastructure now so that when AI-assisted grantmaking becomes feasible, there’s something for it to plug into.
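To make the runway-flagging idea above concrete, here is a minimal sketch of the kind of check that becomes trivial once profile data is structured. The field names and numbers are hypothetical, not the platform’s actual schema.

```python
# Hypothetical claimed-profile fields; in practice these would be
# self-reported by orgs and stored in the platform's database.
profiles = [
    {"org": "safety-lab", "cash_usd": 60_000, "monthly_burn_usd": 30_000},
    {"org": "eval-shop", "cash_usd": 500_000, "monthly_burn_usd": 25_000},
]

def flag_low_runway(profiles, threshold_months=6):
    """Return orgs whose cash on hand covers fewer than
    `threshold_months` months at their current burn rate."""
    flagged = []
    for p in profiles:
        if p["monthly_burn_usd"] <= 0:
            continue  # no burn reported; nothing to flag
        runway = p["cash_usd"] / p["monthly_burn_usd"]
        if runway < threshold_months:
            flagged.append((p["org"], round(runway, 1)))
    return flagged

print(flag_low_runway(profiles))
# [('safety-lab', 2.0)]
```

The interesting work is obviously not this arithmetic; it is getting orgs to keep the underlying fields current, which is exactly what an AI assistant maintaining each profile could help with.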
Who We Are
grantmaking.ai is built by a small team motivated by the belief that better infrastructure can meaningfully increase the impact of AI safety funding:
Matt Brooks—Product and engineering lead. Matt founded and runs a B2B tech startup and brings product development and full-stack engineering experience to the project.
Anchor Funder—Funding and strategy. An experienced individual AI safety donor providing the anchor funding for the platform and initial grant round.
Melissa Samworth—UX, UI, and design lead. Melissa has worked on various EA projects, including Ought and AISafety.com.
Austin Chen — Advising. Austin is the founder of Manifund, the regranting and impact funding platform, and is advising on regrantor coordination, community engagement, and launch strategy.
How You Can Help
If you’re a funder or potential funder: We’d love to talk. Whether you have $100K or $10M to deploy, we want to understand what you need to make confident funding decisions and build the platform around that. We can help you set up your own grant round, connect you with trusted reviewers, or simply give you a better way to explore the landscape. Reach out at hi@grantmaking.ai or book a call here.
If you’re a grantee—an org or individual seeking funding: We’ll be opening applications for the $1M grant round soon. In the meantime, check out the platform at grantmaking.ai and reach out if you want to claim or update your profile early. When applications open, you’ll be able to apply by creating a project and submitting a lightweight application — if you’ve applied to SFF you’ll be able to copy and paste your application.
If you just think this should exist: Share this post. Introduce us to people who should know about it. Comment with feedback, criticism, or ideas. We’re building in the open and we want to hear what the community thinks.
This is very interesting, and has close parallels to a project I am working on. I think we share an underlying premise: effective agentic AI giving will require an infrastructure layer, not just better models.
My contribution to this space is zooidfund, a live experiment that lets AI agents discover, evaluate, and donate directly to humanitarian campaigns created by individuals in need and by organizations. Donations are direct: zooidfund does not hold or intermediate funds. It is still very early, but it is live now, with real campaigns and observable agent behavior.
I think there are important problems and opportunities here for EA: improving evidence-based allocation, bringing higher-quality decision-making to the level of individual donations, and enabling faster response and iteration than traditional funding processes often allow.
More broadly, for AI, I think this kind of infrastructure could become relevant to the question of how resources are directed as AI capabilities increase. If AI systems can help identify need, evaluate evidence, and route funding more efficiently, that could become one mechanism for distributing some of the benefits of AI more broadly, including outside existing institutional funding channels.
Would be great to connect.
“Every entity in the ecosystem gets a living profile on the platform”—Given that this has a high probability of having ecosystem-wide impacts, how much analysis have you conducted of the downsides of doing this? Are you willing to share this analysis?
Hey Chris, thanks for commenting!
Do you mean downsides of building this platform at all? Like, making the ecosystem more legible could make it easier for people to help as well as attack?
Or more like “some orgs will not want to be listed in the database for particular reasons”
Because it seems fine that if an individual, org, or project wants to be excluded, we will keep them out.
I think one of the major bottlenecks in AI safety is that we don’t have enough grantmakers to route resources (capital and compute), especially to smaller orgs / projects, so anything that helps that flow could be super high leverage.
I guess potential downsides include:
the reviews / signal in the database are low quality or actively harmful in a way that makes it more likely that net negative projects get funding
public criticism could harm people and projects (we haven’t yet figured out how we want to handle negative comments / disendorsements; they seem like they could be good signal, but are somewhat risky. It’s probably pretty easy to set up an AI reviewer for comments, or make them semi-private, shown only to verified funders or something. Not sure yet; this is something we’re going to think more about and maybe test out a few things)
incentivizes Goodharting / gaming the platform / popularity contests
These all seem manageable with effort and iteration, though.
Curious what you’re thinking, though!
I’ll PM you and post my comment publicly later. I’m curious if anyone else makes similar points if I don’t do so.