Thanks for this, I found it really useful! I imagine I'll be referring back to it quite a bit.
I would say researchers working on AI governance at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, University of Cambridge (where I work) would agree with a lot of your framing of the risks, pathways, and theory of impact.
Personally, I find it helpful to think about our strategy under four main points (which I think has a lot in common with the ‘field-building model’):
1. Understand—study and better understand risks and impacts.
2. Solutions—develop ideas for solutions, interventions, strategies and policies in collaboration with policy-makers and technologists.
3. Impact—implement those strategies through extensive engagement.
4. Field-build—foster a global community of academics, technologists and policy-makers working on these issues.