Project summary
Apart Research is at a pivotal moment. In the past 2.5 years, we've built a global pipeline for AI safety research and talent that has produced 22 peer-reviewed publications in venues like ICLR, NeurIPS, ICML, and ACL, engaged 3,500+ participants in 42 research sprints across 50+ global locations, and helped launch top talent into AI safety careers.
Our impact spans research excellence, talent development, and policy influence: Two of our recent publications received Oral Spotlights at ICLR 2025 (top 1.8% of accepted papers), and our research has been cited by leading AI and AI safety labs. Our participants have landed jobs at METR, Oxford, and Far.ai, and have moved into impactful founder roles, while our policy engagement includes presenting at premier forums like IASEAI and serving as expert consultants to the EU AI Act Code of Practice. Major tech publications have featured our work, extending our influence beyond academic circles. Without immediate funding, this momentum will stop in June 2025.
Read our Impact Report here: https://apartresearch.com/donate
What are this project's goals? How will you achieve them?
Our primary goals are to:
Convert untapped technical talent into AI safety researchers: Through our global research sprints (200+ monthly participants), we identify individuals with exceptional potential from technical and scientific backgrounds and enable them to contribute to AI safety immediately
Produce high-impact technical AI safety research: Publish 10-15 new peer-reviewed papers on critical challenges including interpretability, evaluation methodologies for critically dangerous capabilities, and AGI security and control; enable horizon-scanning for important research topics in AI safety via our open-ended research hackathons
Place trained researchers at key organizations: Support 30+ Lab Fellows at any given time, preparing them for roles at leading AI safety institutions, non-profits, and startups
We'll achieve these through our proven three-part model:
Global Research Sprints: Weekend-long events across 50+ locations identifying promising researchers and novel approaches
Studio Program: 4-week accelerator developing the best sprint ideas into substantive research proposals
Lab Fellowship: 3-6 month intensive global program for publication-quality work with compute resources, project management, and mentorship
Our model excels at rapidly identifying and developing talent with significant counterfactual impact. For example, one participant in our March 2024 METR x Apart hackathon, a serial entrepreneur with a physics and robotics background, joined METR as a full-time member of technical staff largely because of our event. Shortly afterwards, he also contributed to a research project in our lab, which he presented at ICLR 2025 (and which received an Oral Spotlight). Other fellows have landed jobs at Oxford and Far.ai, founded impactful AI safety startups, or established new AI safety teams in high-growth organizations.
How will this funding be used?
The funding will directly support our talent and research acceleration pipeline. Our budget breakdown for 12 months is as follows (scale down proportionally for smaller amounts):
Staff Compensation ($691,200, 73%):
Research Project Management ensuring fellows produce publication-quality work
Research Engineering providing technical support and automation across projects and talent pipelines
Sprint & core operations ensuring program effectiveness, follow-up, and impact
Program Related Costs ($156,000, 16%):
Direct Program Expenses ($54,000): Lab & Studio infrastructure, research software, fellow conference travel and attendance
Travel Costs ($102,000): Team travel, conference attendance, meals and accommodations
Indirect Costs & Fiscal Sponsorship ($107,600, 11%):
Indirect Expenses ($60,000): Software & subscription costs, office rental and other necessary operational expenses
Fiscal Sponsorship ($47,600): Costs incurred through our agreement with Ashgro for their accounting, legal support, and non-profit status retention
Our current ask of $954,800 represents a 12-month budget. Toward that total, we have several funding milestones:
$120,000 would allow us to maintain our position in AI safety and expand our automated field-building and research tooling, though we would need to cut staff and all programs.
$238,700 is the minimum we need to continue our research and events work for three months, providing continued opportunities for the Apart community.
$477,400 would carry us through the end of the year, giving hundreds of people a chance to participate in and contribute to AI safety.
$954,800 will enable us to continue into 2026 with our research and events work, creating impact for thousands of people.
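As a quick sanity check on the figures above, a minimal illustrative sketch (the category names and amounts are taken directly from the breakdown; the code itself is not part of our tooling) confirms that the categories sum to the full ask and that the milestones correspond to roughly 3, 6, and 12 months of runway:

```python
# Illustrative sanity check of the budget figures quoted above (rounding explains
# the 73% vs. 72% staff-compensation share).
budget = {
    "Staff Compensation": 691_200,
    "Program Related Costs": 156_000,
    "Indirect Costs & Fiscal Sponsorship": 107_600,
}

total = sum(budget.values())
assert total == 954_800  # the full 12-month ask

for name, amount in budget.items():
    print(f"{name}: ${amount:,} ({amount / total:.0%} of total)")

# Milestones correspond to roughly 3, 6, and 12 months of runway:
assert total / 4 == 238_700  # three months
assert total / 2 == 477_400  # six months
```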
Who is on your team? What's your track record on similar projects?
Our team combines research expertise and operational excellence:
Jason Hoelscher-Obermaier (Research Director): PhD in quantum optics, AI engineer at multiple startups, and PIBBSS fellow
Natalia Pérez-Campanero (Research Project Manager): PhD in Bioengineering, former program manager at the Royal Society's talent accelerator
Archana Vaidheeswaran (Community Program Manager): Board member at Women in ML, experienced in organizing workshops with 2,000+ participants co-located with major ML conferences
Jaime Raldúa (Research Engineer): 8+ years of ML engineering experience, with multiple key contributions to software stacks at impactful EA orgs
Advisors:
Esben Kran: Co-founder and advisor
Finn Metz: Operations and funding advisor
Christian Schroeder de Witt: Research advisor
Eric Ries: Strategic advisor
Nick Fitz: Organizational development advisor
Track Record:
22 peer-reviewed AI safety publications, including at ICLR, NeurIPS, and ACL
Two papers receiving Oral Spotlights at ICLR 2025 (top 1.8% of accepted papers)
42 global research sprints engaging 3,500+ participants
100+ researchers incubated through our Lab Fellowship
Fellows placed at METR, Oxford, and Far.ai, with others founding impactful AI safety startups or establishing new AI safety teams in high-growth organizations
Research cited by OpenAI's Superalignment team and other major AI labs
What are the most likely causes and outcomes if this project fails?
The most likely failure modes are:
Insufficient funding: Without adequate resources, we would be forced to disband a high-functioning team built over 2.5 years, losing a proven talent pipeline at a critical time for AI safety and abandoning valuable talent and in-progress research projects. Mitigation: We have already diversified our funding substantially, including through partnerships and sponsorships.
Research relevance and impact: Our research may not keep up with rapidly evolving field priorities and we could face diminishing returns on novelty for our research hackathon model. Mitigation: We maintain close collaboration with leading AI labs and safety organizations to continuously align our research priorities, while our model allows for rapid adaptation to emerging safety concerns and to previously neglected topic areas.
Opportunity cost: With AI capabilities advancing rapidly, moving fast now is necessary to keep critical momentum at precisely the time when safety research is most needed. Mitigation: Our model is designed for efficiency and rapid adaptation, allowing us to maximize impact per dollar invested while prioritizing time-sensitive work on urgent and impactful research areas, such as by prioritizing research inputs for the General-Purpose AI Code of Practice.
Talent pipeline execution risk: Challenges in maintaining quality across our work with both global mid-career talent and early-career researchers, and in avoiding overlap with other programs. Mitigation: We have systematic evaluation metrics for participants and a strategic focus on technical backgrounds and locations where we complement rather than compete with existing programs. Examples of differentiation include being remote-first and part-time, which is essential for helping mid-career individuals transition, and focusing on strong research management, which helps non-academics succeed in research.
Industry and partnership challenges: Difficulties in launching new programs, ensuring partner alignment, and continuously facilitating high-quality connections between stakeholders. Mitigation: We've built strong connections with leaders and researchers at key organizations, established formal partnership agreements with clear expectations, and designed our talent pipeline to align with the needs of the AI safety field. We are also expanding our sponsorship setup, through which, for example, Lambda Labs provides $5k in compute to every team for free.
Broader ecosystem risks: Public skepticism of AI safety work could negatively impact donor perception and fundraising efforts. Mitigation: We maintain transparent operations, publish our research openly, engage constructively with diverse perspectives, and focus our messaging on concrete technical contributions.
If we fail to maintain Apart Research, the field would lose:
A proven pipeline for identifying and developing global technical talent in AI safety
An efficient mechanism for exploring novel research directions at scale
A bridge between diverse technical communities and established AI safety organizations
How much money have you raised in the last 12 months, and from where?
In the past 12 months, Apart Research has raised approximately $680,000 from:
We've also previously received support from the Long-Term Future Fund (LTFF), Foresight, and ACX. This funding has enabled us to build our team and infrastructure, but our current funding expires in June 2025, necessitating this fundraising round to maintain our operations.