Launching Foresight Institute’s AI Grant for Underexplored Approaches to AI Safety – Apply for Funding!

Summary

We are excited to launch our new grant programme, which will fund areas we consider underexplored in AI safety. In light of the potential for shorter AGI timelines, we will re-grant $1-1.2 million per year to support much-needed development in the following areas:

  1. Neurotechnology, Whole Brain Emulation, and Lo-fi uploading for AI safety

  2. Cryptography and Security approaches for Infosec and AI security

  3. Safe Multipolar AI scenarios and Multi-Agent games

Apply for funding; applications will be reviewed on a rolling basis.

See below, or visit our website to learn more!

Areas that We’re Excited to Fund

Neurotechnology, Whole Brain Emulation, and Lo-fi uploading for AI safety

We are interested in exploring the potential of neurotechnology, particularly Whole Brain Emulation (WBE) and cost-effective lo-fi approaches to uploading. If progress in these areas could be significantly sped up, the resulting re-ordering of technology arrival might reduce the risk of unaligned AGI through the presence of aligned software intelligence.

We are particularly excited by the following:

  • WBE as a potential technology that may generate software intelligence that is human-aligned simply by being based directly on human brains

  • Lo-fi approaches to uploading (e.g. extensive lifetime video of a laboratory mouse to train a model of a mouse without referring to biological brain data)

  • Neuroscience and neurotech approaches to AI Safety (e.g. BCI development for AI safety)

  • Other concrete approaches in this area

  • General scoping/mapping opportunities in this area, especially from a differential technology development perspective, as well as understanding the reasons why this area may not be a suitable focus

Cryptography and Security approaches for Infosec and AI security

We want to explore the potential benefits of cryptography and security technologies in securing AI systems. This includes:

  • Computer security approaches to AI infosecurity, or ways of scaling up security techniques so they may apply to more advanced AI systems

  • Cryptographic and auxiliary techniques for building coordination/governance architectures across different AI(-building) entities

  • Privacy-preserving verification/​evaluation techniques

  • Other concrete approaches in this area

  • General scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area is not a good focus area

Safe Multipolar AI scenarios and Multi-Agent games

We want to explore the potential of safe Multipolar AI scenarios, such as:

  • Multi-agent game simulations or game theory

  • Scenarios that avoid collusion and deception and promote Pareto-preferred, positive-sum dynamics

  • Approaches for tackling principal-agent problems in multipolar systems

  • Other concrete approaches in this area

  • General scoping/mapping opportunities in this area, especially from a differential technology development perspective, or exploring why this area is not a good focus area

Interested in Applying?

We look forward to receiving your submissions. Applications will be reviewed on a rolling basis; apply here.

For the initial application, you’ll be required to submit:

  • Background on yourself and your work

  • A short summary and budget for your project, highlighting which area you are applying to and outlining what you would like to investigate and why

  • At least two references

We will aim to get back to applicants within 8 weeks of receiving their application.

If you are interested, please find more information about the grant here.