I have particular expertise in:
- Developing and implementing policy
- Improving decision making within organisations, mainly by improving the reasoning processes (e.g. predictive/forecasting processes) that underpin how people make and communicate judgments.
- AI safety policy
This has been achieved through being:
1) Director of Daymark Decision Insights, a company which provides consultative services and tailor-made workshops on improving decision-making and reasoning processes to high-impact organisations (https://www.daymark-di.com/). More recently, I've provided specific consultative services on policy development and advocacy to a large-scale AI safety organisation.
2) Director of Impactful Government Careers, an organisation focused on helping individuals find, secure, and excel in high-impact civil service roles.
3) I spent 5 years working at the heart of the UK Government, 4 of those at HM Treasury, in roles including:
- Head of Development Policy, HM Treasury
- Head of Strategy, Centre for Data Ethics and Innovation
- Senior Policy Advisor, Strategy and Spending for Official Development Assistance, HM Treasury
These roles involved advising UK Ministers on policy, spending, and strategy issues relating to international development; assessing the value for money of proposed high-value development projects; and developing the 2021 CDEI Strategy and leading the associated organisational change.
4) I've completed an MSc in Cognitive and Decision Sciences at UCL, where I focused my research on probabilistic reasoning and improving individual and group decision-making processes. My final research project was an experimental study of whether a short (2-hour) course on Bayesian reasoning could improve individuals' single-shot accuracy when forecasting geopolitical events.
I tend to agree with you, though I would rather people erred on the "close early" side than the "hold out" side, simply because the sunk cost fallacy and confirmation bias in one's own idea are incredibly strong, and I see no compelling reason to think current funders in the EA space help counteract them (beyond perhaps being more aware of them than the average funder).
In an ideal system, funders would drive most of these decisions by requiring clear milestones and evaluation processes from those they fund. A funder that did this could identify predictive signals of success and help avoid premature or overdue closures (e.g. "policy advocacy groups that went on to succeed have, on average, met more/fewer comparable milestones at this stage, so we recommend continuing/stopping funding"). This still allows an organisation to pitch why it is an exception to the average, but the funder should be in the best position to know what is signalling success and what isn't.
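To make the "predictive signals" idea concrete, here is a minimal sketch of the kind of base-rate comparison I have in mind. Everything in it is hypothetical (the milestone counts, the success labels, and the crude decision rule); it's an illustration of the mechanism, not a proposal for an actual scoring system:

```python
# Hypothetical sketch: compare a grantee's milestone completion at a fixed
# checkpoint against base rates from the funder's past grants. All data is
# made up for illustration.

# (milestones_met_at_checkpoint, eventually_succeeded) for past grants
past_grants = [
    (5, True), (4, True), (5, True), (3, False),
    (2, False), (4, True), (1, False), (3, True),
]

def average(xs):
    return sum(xs) / len(xs)

# Base rates: how many milestones had eventual successes vs. failures met?
success_avg = average([m for m, ok in past_grants if ok])
failure_avg = average([m for m, ok in past_grants if not ok])

def recommend(milestones_met):
    """Crude decision rule: recommend continuing if the grantee looks more
    like past successes than past failures at this checkpoint."""
    midpoint = (success_avg + failure_avg) / 2
    return "continue funding" if milestones_met >= midpoint else "review for closure"

print(f"Successful grants averaged {success_avg:.1f} milestones; "
      f"unsuccessful averaged {failure_avg:.1f}.")
print("Grantee with 2 milestones met:", recommend(2))
print("Grantee with 4 milestones met:", recommend(4))
```

The point isn't the toy decision rule; it's that only a funder who tracks comparable milestones across a portfolio can build these base rates at all, which is exactly why the funder, not the individual organisation, is best placed to judge what signals success.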
Unfortunately, I don't see such a system, and I fear the incentives in the EA ecosystem aren't aligned to create it. The organisations getting funded enjoy the looser, less funder-involved setup. And funders reduce their reputational risk by not properly evaluating what is working and why, while continuing to fund projects they are personally interested in but which have questionable causal impact chains. *Noting that I think EA GHD has much less of this issue, mainly because funders anchor on GiveWell assessments, which to a large degree deliver the mechanism I outline above.