Executive Director of the Swift Centre for Applied Forecasting (led projects with U.K. Gov., Google DeepMind, and on AI security and capability risks).
Co-founder of ‘Looking for Growth’, a political movement for growth in the U.K.
CTO of Praxis, an AI-led assessment platform for schools
Former Head of Policy at ControlAI (co-authored ‘A Narrow Path’)
Former Director of Impactful Government Careers
Former Head of Development Policy at HM Treasury
Former Head of Strategy at the Centre for Data Ethics and Innovation
Former Senior Policy Advisor at HM Treasury, leading on the economic and financial response to the war in Ukraine, and the modelling and allocation of the UK’s ‘Official Development Assistance’ budget.
MSc in Cognitive and Decision Sciences from UCL; my dissertation was an experimental study using Bayesian reasoning to improve predictive reasoning and forecasting among U.K. public policy officials and analysts.
Some excellent reflections here. Across advocacy in aid, animal welfare, and especially AI, I often see errors in understanding the actual incentives and interests of those with power in government.
A lot of time is also spent advocating to people who have little to no tangible influence. This leads to organisations claiming they’ve delivered a lot of “direct advocacy” when in fact they’ve just spoken to a lot of people, very few of whom have the autonomy or power to enact any change.
As someone who worked at the very centre of the U.K. government for c.5 years on international development and finance policy, as well as on AI policy (including roles as Head of Development Policy and Senior Policy Advisor for ODA Strategy and Spending at HM Treasury, and Head of Strategy at the CDEI), I’m always happy to share my thoughts directly if ever helpful.