Grantmaking.ai

AI safety funding may grow by $1B+
But the infrastructure isn't ready

Grantmaking.ai is the open evaluation platform for AI safety funding. Every opportunity, review, and signal in one place, so new funders can participate without starting from scratch.

Share your thoughts: comment on our design doc

The ecosystem is broken for new funders

AI lab employees and newly motivated donors have capital to deploy. But the current system wasn't built for them.

Discovery is insider-only

The best opportunities are hidden behind personal networks. Breaking in can take months.

Research is duplicated

Every funder asks the same questions. Every grantee answers them again.

Information is stale

An org's website says “seeking funding” but they closed their round six months ago. You can't know without asking.

Signal is invisible

What do people you trust think about a project? That lives in private conversations and the heads of experienced grantmakers.

Grant evaluation shouldn't require insider access.
We're making it open.

See what experienced funders and domain experts think — and why. Reviews, picks, and community signal, all in one place.

Public reviews

See who said what. Weight signal by source, not volume.

Regrantor picks

Follow trusted evaluators. See what they recommend and why.

Public track records

Evaluators build reputation. Good judgment gets recognized over time.

What we've got

Everything you need to move capital toward high-impact work.

Every opportunity in one place

Orgs, researchers, projects, funds, and grantmakers — searchable, browsable, always growing.

The context you actually need

Quick scan, one-pager, and deep-dive docs. Team backgrounds, contacts, and theories of change.

Live funding status

Runway, burn rate, and funder pipeline — with clear timestamps so you know what's current.

Seeking funding?

We build your profile from public sources so you don't have to start from zero. Claim it, add context, and stop repeating yourself to every funder.

Step 1

We find you

Your profile is auto-created from public data — grants, publications, and web presence.

Step 2

You claim it

Verify ownership, correct anything, and add the details only you know.

Step 3

Funders find you

Visible to every funder on the platform. No more cold emails or network-dependent discovery.

Built to last, funded to launch

A multi-year track record shipping tools for the AI safety community.

Anchor-funded

Committed capital to distribute through the platform at launch.

Open & nonprofit

Grant-funded infrastructure, not a business. All public data stays public.

Community-shaped

Designed with funders, regrantors, and grantees from the start.

Stay up to date

Organizations in the AI Safety Ecosystem

Explore a quick preview of the organizations working to make AI safer.

Previewing 684 organizations and 1,503 grants from the broader database.

5050

5050 is a free 12-14 week company-builder program run by Fifty Years that helps scientists, researchers, and engineers become deep-tech startup founders, with a dedicated AI safety track.

San Francisco, CA · established

80,000 Hours

80,000 Hours is a nonprofit that provides free research, career advice, and a job board to help people find careers that effectively tackle the world's most pressing problems, with a current focus on AI safety.

London, United Kingdom · mature

AAAI/ACM Conference on Artificial Intelligence, Ethics and Society

AIES is a peer-reviewed academic conference series jointly organized by AAAI and ACM that brings together a multidisciplinary community to examine the ethical, social, and policy dimensions of artificial intelligence.

mature

Access Now

International digital rights organization founded in 2009. Annual revenue approximately $15M (as of 2025). Led by Executive Director Alejandro Mayoral Banos (succeeded Brett Solomon, who served 2009-2024). AI governance work rooted in human rights law. Led a coalition of 110+ civil society organizations in EU AI Act advocacy on transparency safeguards. Research covers LLM privacy risks and surveillance of migrants. Co-Executive Director Amba Kak (of AI Now Institute) was named to the TIME 100 Most Influential People in AI list.

ACM FAccT

The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) is a premier peer-reviewed academic conference that brings together researchers and practitioners to investigate fairness, accountability, and transparency in socio-technical systems.

established

ACX Atlanta

ACX Atlanta (The Atlanta Moloch Slayers) is a monthly in-person meetup group for rationalists and readers of the Slate Star Codex and Astral Codex Ten blogs in Atlanta, Georgia.

Atlanta, Georgia, USA · seed

Ada Lovelace Institute

Independent UK research institute established by the Nuffield Foundation in 2018, focused on ensuring data and AI work for people and society. Produces influential research on algorithmic accountability, facial recognition regulation, biometrics governance, and AI in the public sector. A leading European voice bridging technical research with human rights-focused policy.

Adam Jermyn

Adam Jermyn is a physicist and AI safety researcher at Anthropic, working on neural network interpretability and inner alignment. He previously conducted independent AI alignment research after transitioning from a career in computational astrophysics.

Boston, MA, USA · established

Advanced Research + Invention Agency (ARIA)

ARIA is a UK government research funding agency that backs high-risk, high-reward R&D in underexplored areas, including a major £59 million programme on formal mathematical safety guarantees for AI systems.

London, United Kingdom · mature
Explore the full database — 684 organizations

© 2026 Grantmaking.ai

Community signal

Votes and reviews from funders and domain experts, with visible identity so you can weigh the source.

Periodic project updates

Investor-style updates from funded projects. One place to track how the work is actually going.

Open data, open API

Pull any public data programmatically. Run your own analysis, build your own tools, plug in agents.
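For a sense of what "pull any public data programmatically" could look like, here is a minimal sketch in Python. The payload shape and field names (`funding_status`, `last_verified`) are illustrative assumptions, not the real API schema; it shows the kind of analysis, such as filtering for recently verified funding-seekers, that open data makes possible.

```python
import json
from datetime import date

# Illustrative payload in the shape the open API might return.
# Field names here are assumptions, not the real schema.
sample = json.loads("""
[
  {"name": "Org A", "funding_status": "seeking", "last_verified": "2026-01-10"},
  {"name": "Org B", "funding_status": "closed",  "last_verified": "2025-06-01"},
  {"name": "Org C", "funding_status": "seeking", "last_verified": "2025-12-02"}
]
""")

def seeking_funding(orgs, verified_since):
    """Names of orgs still seeking funding, verified on or after a cutoff date."""
    cutoff = date.fromisoformat(verified_since)
    return [
        o["name"]
        for o in orgs
        if o["funding_status"] == "seeking"
        and date.fromisoformat(o["last_verified"]) >= cutoff
    ]

print(seeking_funding(sample, "2025-12-01"))  # → ['Org A', 'Org C']
```

The same filter run against the real endpoint would let a funder skip orgs whose "seeking funding" status is stale, which is exactly the problem the live funding status feature addresses.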