Seeking funding? We're kicking off grantmaking.ai with a $1M launch round.
Find, evaluate, and fund AI safety work without rebuilding the map yourself.
We're a small non-profit team with experience building infrastructure for the AI safety community.
Why we're building this
Dozens of new funders are about to deploy billions into AI safety. The current system wasn't built for them.
Projects asking for under $50K are often too small for larger funds to prioritize and too hard for individual donors to discover.
Every donor rebuilds the same picture. Every grantee re-answers the same questions.
It's hard to tell who is actively fundraising and what their runway looks like.
Good opportunities and honest takes on them are still mostly passed around through private networks.
We're building the coordination layer the ecosystem needs: comprehensive public data, living funding status, and trusted signal — all in one place.
What's in the database
Here's a preview drawn from the 454 organizations in our broader database.
A Dutch foundation that works to reduce existential risk by informing the public debate through media engagement, policy advocacy, research, and public events.
Team
Led by Otto Barten
Amount needed

Seeking funding?
We're partnering with Manifund to distribute $1M in grants for AI existential safety work.
Applications will be opening soon, with more details to follow. Sign up to get notified!
Not fundraising right now? We still want your profile in the database! We're building a comprehensive AI safety database, so when you do need funding, your profile is already there, already current, and already visible to every funder.
Step 1
We'll start with public information where we can, so you're not always beginning from a blank page.
Step 2
Claim the profile, fix anything outdated, and add the details funders can't easily see elsewhere.
Step 3
An AI safety funding platform. A database of organizations, projects, and funds with a trust and signal layer on top.
Team
Led by Matt Brooks
Amount needed



Every public AI safety funding opportunity, and the context you need, in one place.
See what experienced evaluators think, not just what projects say about themselves.
See who is actively fundraising and where marginal dollars can help the most.
Put your request in front of dozens of donors and get feedback in weeks, not months.



An international AI x-risk strategy think tank that conducts scenario research and governance analysis to mitigate risks from transformative AI technologies.
Team
Led by David Kristoffersson
Amount needed



The French Center for AI Safety (Centre pour la Sécurité de l'IA) is a Paris-based non-profit think tank and research center working to reduce risks from AI through education, technical research, and policy advocacy in France and Europe.
Team
Led by Charbel-Raphaël Segerie
Amount needed



A non-profit that runs fellowships and educational programs to develop expert, mission-aligned talent for AI safety research and governance.
Team
Led by Diana Rendon
Amount needed



A Swiss non-profit think tank that develops evidence-based policy proposals on AI safety, biosecurity, and emerging technologies, bridging science, politics, and civil society for Switzerland and beyond.
Team
Led by Patrick Stadler
Amount needed



A non-profit AI alignment research organization focused on agent foundations, pursuing formal goal alignment approaches that would scale to superintelligence.
Team
Led by Tamsin Leake
Amount needed



Your profile remains part of the database, so future donors can find you faster even after the initial round.