Apart Research
Funding ends June 2025: urgent support for a proven AI safety pipeline that converts technical talent from 26+ countries into published contributors
Connor Axiotes
Filming a feature-length documentary on risks from AI, aimed at a non-technical audience and intended for streaming services
Asterisk Magazine
Guy
Out of This Box: The Last Musical (Written by Humans)
Mox
For AI safety, AI labs, EA charities & startups
Ronak Mehta
Funding for a new nonprofit organization focused on accelerating and automating safety work.
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Remmelt Ellen
Cost-efficiently support new careers and new organisations in AI Safety.
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges.
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Tyler John
Jai Dhyani
Developing AI Control for Immediate Real-World Use
Scott Viteri
Compute Funding
Francesca Gomez
Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.
Matthew Farr
My allocated travel funding is insufficient; seeking extra funding for flights, accommodation, etc., to present a poster and network
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Epoch Artificial Intelligence, Inc.
For tracking and predicting AI progress toward AGI
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Centre pour la Sécurité de l'IA
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.
4M+ views on AI safety: Help us replicate and scale this success with more creators