Remmelt Ellen
Ryan Kidd: Help us support more research scholars!
Apart Research: Incubate AI safety research and develop the next generation of global AI safety talent via research sprints and research fellowships
Kunvar Thaman
Alexander Pan
Lawrence Chan: 3 month
Zhonghao He: Surveying neuroscience for tools to analyze and understand neural networks and building a natural science of deep learning
Kabir Kumar
Dusan D Nesic: Free, subsidized, or cheap office space outside the EU, in good timezones, with favorable visa policies (especially for Chinese and Russian citizens, but also others)
Jesse Hoogland: 6-month funding for a team of researchers to assess a novel AI alignment research agenda that studies how structure forms in neural networks
Apollo Research: Hire 3 additional AI safety research engineers/scientists
Luan Rafael Marques de Oliveira: Support to translate BlueDot Impact’s AI alignment curriculum into Brazilian Portuguese for use in university study groups and an online course
Ethan Josean Perez: Four projects (finding RLHF alignment failures, debate, improving chain-of-thought faithfulness, and model organisms)
Brian Tan: 1.9 FTE for 9 months to pilot a training program in Manila focused exclusively on mechanistic interpretability
Rubi Hudson
Joseph Bloom: Trajectory Models and Agent Simulators
Cadenza Labs: We're a team of SERI-MATS alumni working on interpretability, seeking funding to continue our research after our LTFF grant ended.
Robert Krzyzanowski: Compute and infrastructure costs
Lisa Thiergart
Fazl Barez: Funding to establish a safety and interpretability lab within the Torr Vision Group (TVG) at Oxford