Substack: nwprtnarrative.substack.com
Executive Director of the Swift Centre for Applied Forecasting (led projects with the U.K. Government and Google DeepMind, and on AI security and capability risks).
Co-founder of ‘Looking for Growth’, a political movement for growth in the U.K.
CTO of Praxis, an AI-led assessment platform for schools
Former Head of Policy at ControlAI (co-authored ‘A Narrow Path’)
Former Director of Impactful Government Careers
Former Head of Development Policy at HM Treasury
Former Head of Strategy at the Centre for Data Ethics and Innovation
Former Senior Policy Advisor at HM Treasury, leading on the economic and financial response to the war in Ukraine, and on the modelling and allocation of the U.K.’s ‘Official Development Assistance’ budget.
MSc in Cognitive and Decision Sciences from UCL; my dissertation was an experimental study on using Bayesian reasoning to improve predictive reasoning and forecasting among U.K. public policy officials and analysts.
This obviously assumes Marcus has a sufficient level of experience to justify the claims, which I think, given other comments, can be adequately challenged.
It would be good to know what metric, threshold, or examples would be taken as forecasting delivering adequate impact to justify funding. From examples in this thread alone, we can see that senior government decision-makers in both the U.K. (including Ministerial teams and critical committees) and the U.S., frontier labs’ safety teams, and philanthropic funds (moving tens of millions of dollars a year) have utilised forecasting (either the process or the outputs) to inform their decisions.
The argument that it only shifts a decision by 1-2% is totally fair. But to be consistent, I’d expect the same people who make that argument to also be highly sceptical of the vast majority of research funding.