Foresight Institute, a Bay Area 501(c)(3) nonprofit, has received funding to support projects in three AI safety areas we think are currently under-explored and under-funded:
1. Neurotechnology, brain-computer interface, whole brain emulation, and "lo-fi" uploading approaches to build human-aligned software intelligence
2. Computer security, cryptography, and related techniques to help secure AI systems
3. Multi-agent simulations, game theory, and related techniques to create safe multipolar AI scenarios that avoid collusion and foster positive-sum dynamics.
The grant application process is now open and accepts applications on a rolling basis, with the goal of a frictionless and fast turnaround so that projects can spin up and experiment easily. We plan to grant circa $1M per year across projects that could make a significant difference for AI safety within short timelines.
Since the launch of the grant program in August 2023, we have funded 10 projects for a total of $440K.
We received significantly more promising applications than we expected and are currently funding-constrained in two ways:
Of all the applications we received, there were 12 additional applications that we thought were worthy of support but were not able to fund due to lack of funds (the total requested amount across these 12 applications ranged from $890K to $3.1M).
Of the applications we were able to fund, we were often only able to support the lower bound of the requested funding, even though we believe that a total of $2.4M could have been meaningfully applied across funded applications in this first grant round.
A Manifund grant would enable us to increase the amount of funding for promising projects in the ways described above.
Please visit https://foresight.org/ai-safety for full details and application instructions.
Goals
The goal of the grant program is to advance progress in three areas of AI safety that we deem underexplored under short AI timelines.
The goals in depth:
1. BCI, WBE, and "lo-fi" uploading approaches to produce human-aligned software intelligence
Explore whether neurotechnologies, in particular BCI or WBE development (or lo-fi approaches to uploading, which may be more cost-effective), could be sped up enough and made safe enough to decrease the risk of unaligned AGI via the presence of human-aligned software intelligence.
This includes exploring ideas such as:
WBE as a potential technology that may generate software intelligence which is human-aligned simply by being based directly on human brains
Lo-fi approaches to uploading (e.g. extensive lifetime video of an organism that could be used to train a model of that organism without referring to biological brain data)
Other neuroscience and neurotech approaches to AI safety (e.g. BCI development for AI safety)
Other concrete approaches in this area
General scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area may not be impactful
2. Computer security, cryptography, and related techniques to help secure AI systems
Projects that leverage the potential benefits of cryptography and security technologies for securing AI systems. This includes:
Computer security to help with AI infosecurity, or approaches for scaling up security techniques so they could potentially apply to more advanced AI systems
Cryptographic and auxiliary techniques for building coordination/governance architectures across different AI(-building) entities
Privacy-preserving verification/evaluation techniques
Other concrete approaches in this area
General scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area may not be impactful
3. Multi-agent simulations, game theory, and related techniques to create safe multipolar AI scenarios that avoid collusion and foster positive-sum dynamics
Explore the potential of safe multipolar AI scenarios, such as:
Multi-agent game simulations or game theory
Scenarios that avoid collusion and deception, and/or encourage Pareto-preferred and positive-sum dynamics
Approaches for tackling principal-agent problems in multipolar systems
Other concrete approaches in this area
General scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area may not be impactful
How we achieve the goals
We achieve these goals by supporting outstanding project proposals in these areas. This involves advertising the grant in relevant communities, rigorous vetting of grant applications by our technical advisors, and follow-on support with regular milestone check-ins for successful grantees.
Now that the standard grant process is set up and receiving a steady stream of applications, we could effectively make use of additional funding in these three domains.
We are also open to receiving funding dedicated to only one of our three domains if this is of particular interest to the funder.
80% of the funding will be used to increase the funding pool for our AI safety grants and could be deployed immediately. It would be used to fund new applications and, in some cases, to increase the funding of particularly promising prior applications that we were not able to fund to the extent we deemed ideal.
20% will be used for the administration costs of the granting program. This includes, for example, grant evaluation, grant processing, due diligence on grantees, transfer fees, advisor fees, and impact evaluation of the granting program and the specific grants given.
Minimum funding and funding goal
The minimum funding request is $5K as this is likely the smallest amount that could meaningfully increase the funding of our grant applicants.
The funding goal is set to $1M, as this would double the amount of funding we could meaningfully distribute without significantly increasing our overhead.
Any amount between $5K and $1M would be much appreciated and would be used in the ways discussed above.
Any amount between the funding goal of $1M and our maximum bound of $3M would also be used as described above, but may require increasing our overhead to ensure we can distribute it quickly yet meaningfully.
Allison Duettmann:
Allison Duettmann is the president and CEO of Foresight Institute, a leading Bay Area research non-profit founded in 1986 to advance technology for the benefit of life. She initiated the institute's fellowship program in 2017, so she has experience in helping select and support researchers in achieving their goals. She dedicated her MS to AI safety, and since 2017 has hosted Foresight's annual AI strategy workshops, which gather researchers, policy makers, and industry to explore coordination and governance dynamics around AGI, giving her an overview of the AI safety space. Since 2020, these workshops have focused more on security- and cryptography-related approaches to AI safety, two areas in which Foresight has had a historically strong community, so she also has insight into the sub-domains that the AI safety grant addresses. She co-authored a book, Gaming the Future, focused on technologies for secure multipolar cooperation dynamics, initiated the Norm Hardy Prize for Computer Security, and most recently launched the AI safety grant program.
Beatrice Erkers:
Beatrice Erkers serves as the Chief of Operations and Co-Director of the AI Grant Program at the Foresight Institute. Her role involves managing the institute's operations and directing the AI Grant Program, with a focus on advancing AI technologies in a responsible and ethical manner.
Niamh Peren:
Niamh Peren is Chief of Strategy and Innovation and co-Director of Foresight Institute’s AI Safety Grant, with a special interest in brain-computer interfaces. She recently finished managing the report for the ‘AI for Nature and Climate Workshop’, which Foresight Institute hosted in partnership with the Bezos Earth Fund. In her role, she oversees Foresight Institute’s programs, including prizes, fellowships, outreach, and fundraising.
I could see the following two being the most likely areas of failure:
Most of the funded projects turn out to be lower quality than currently expected or encounter other roadblocks that prevent them from hitting their proposed milestones for progress.
The selected three cause areas turn out not to be impactful for AI safety, so that even successful projects have little impact on the end goal of safe AI. In this case, we would need to reassess our choice of focus areas for future grant rounds.
An existing recurring donation of ca. $1M per year dedicated to funding the grant program.