Seeking funding? We're kicking off grantmaking.ai with a $1M launch round.
Find, evaluate, and fund AI safety work — without rebuilding the map yourself.
We're a small team with experience building infrastructure for the AI safety community.
Why we're building this
Dozens of new funders are about to deploy billions into AI safety. The current system wasn't built for them.
Good opportunities and honest takes on them are still mostly passed around through private networks.
Every donor rebuilds the same picture. Every grantee re-answers the same questions.
It's hard to tell who is actively fundraising, what their runway looks like, or whether a round is already close to full.
Projects asking for <$50K are often too small for larger funds and too hard to discover casually.
We're building the coordination layer the ecosystem needs: comprehensive public data, living funding status, and trusted signal — all in one place.
What's in the database
Here's a preview drawn from our broader database of 454 organizations and 1,769 grants.
A Dutch foundation that works to reduce existential risk by informing the public debate through media engagement, policy advocacy, research, and public events.
Team: Led by Otto Barten

Seeking funding?
We're kicking off the platform with a $1M grant round for existential AI safety work, distributed in partnership with Manifund.
Application process, reviewer panel, and timeline coming soon — sign up to get notified.
Not fundraising right now? We still want your profile in the database! We're building the comprehensive AI safety database — so when you do need funding, your profile is already there, already current, already visible to every funder.
Step 1
We'll start with public information where we can, so you're not always beginning from a blank page.
Step 2
Claim the profile, fix anything outdated, and add the details funders can't easily see elsewhere.
Step 3
An AI safety funding platform. A database of organizations, projects, and funds with a trust and signal layer on top.
Team: Led by Matt Brooks

Browse the ecosystem without needing a dozen private intros.
See what experienced evaluators think, not just what projects say about themselves.
Get the information you need to act without rebuilding the whole picture from scratch.
Time-bounded rounds to fund smaller, underserved opportunities quickly.

An international AI x-risk strategy think tank that conducts scenario research and governance analysis to mitigate risks from transformative AI technologies.
Team: Led by David Kristoffersson

The French Center for AI Safety (Centre pour la Sécurité de l'IA) is a Paris-based non-profit think tank and research center working to reduce risks from AI through education, technical research, and policy advocacy in France and Europe.
Team: Led by Charbel-Raphaël Ségerie

A nonprofit that runs fellowships and educational programs to develop expert, mission-aligned talent for AI safety research and governance.
Team: Led by Diana Rendon

A Swiss non-profit think tank that develops evidence-based policy proposals on AI safety, biosecurity, and emerging technologies, bridging science, politics, and civil society for Switzerland and beyond.
Team: Led by Patrick Stadler

A non-profit AI alignment research organization focused on agent foundations, pursuing formal goal alignment approaches that would scale to superintelligence.
Team: Led by Tamsin Leake

Your profile remains part of the database, so future donors can find you faster even after the initial round.