Connor Axiotes
Filming a feature-length documentary on AI risk for a non-technical audience, for release on streaming services
Asterisk Magazine
Tsvi Benson-Tilsen
Accelerating strong human germline genomic engineering to create many geniuses
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Center for AI Policy
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges.
John Sherman
Funding For Humanity: An AI Risk Podcast
ampdot
A community exploring and predicting the risks and opportunities of a future with many independently controlled AI systems
Tyler John
Jai Dhyani
Developing AI Control for Immediate Real-World Use
Francisco Carvalho
Within 18 months, the nooscope will deliver public tools to map how ideas spread, starting with psyop detection
Nuño Sempere
A foresight and emergency response team seeking to react fast to calamities
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Piotr Zaborszczyk
Reach the university that trained close to 20% of OpenAI's early employees
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jonathan Claybrough
5-day bootcamp upskilling participants in biosecurity, to enable and empower career changes toward reducing biorisks, from the ML4Good organisers
Jordan Braunstein
Combining Kickstarter-style functionality with transitional anonymity to decrease the risk and raise the expected value of participating in collective action.
PauseAI US
SFF main round did us dirty!
Centre pour la Sécurité de l'IA
Distilling AI safety research into a complete learning ecosystem: textbook, courses, guides, videos, and more.
4M+ views on AI safety: Help us replicate and scale this success with more creators