Connor Axiotes
Filming a feature-length documentary on AI risk for a non-technical audience, to be distributed on streaming services
Tsvi Benson-Tilsen
Accelerating strong human germline genomic engineering to create many geniuses
Asterisk Magazine
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Jai Dhyani
Developing AI Control for Immediate Real-World Use
John Sherman
Funding For Humanity: An AI Risk Podcast
Tyler John
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges
Center for AI Policy
Francisco Carvalho
The nooscope will deliver public tools to map how ideas spread within 18 months, starting with psyop detection
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Nuño Sempere
A foresight and emergency response team seeking to react quickly to calamities
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Jordan Braunstein
Combining "kickstarter"-style functionality with transitional anonymity to decrease risk and raise the expected value of participating in collective action
PIBBSS
Funding unique approaches to research, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
PauseAI US
SFF main round did us dirty!
Centre pour la Sécurité de l'IA
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.
4M+ views on AI safety: Help us replicate and scale this success with more creators