Notes on UK AISI Alignment Project

UK AISI recently launched The Alignment Project, ‘a global fund of over £15 million, dedicated to accelerating progress in AI alignment research’.

Interesting to see their partners include Canada AISI, Anthropic, UK ARIA, and Safe AI Fund (run by a former YC president).

On the other hand, £15m doesn’t seem huge (and is presumably less than Open Philanthropy’s annual AI safety spending).

Here’s their research agenda, which is surprisingly theoretical and broad: it includes information theory, game theory, and cognitive science.

They’re also hiring for a ‘Strategy & Operations Associate’.
