Trying to make transformative AI go less badly for sentient beings, regardless of species and substrate
Interested in:
Sentience- & suffering-focused ethics; sentientism; painism; s-risks
Animal ethics & abolitionism
AI safety & governance
Activism, direct action & social change
Bio:
From London
BA in linguistics at the University of Cambridge
Almost five years in the British Army as an officer
MSc in global governance and ethics at University College London
One year working full-time in environmental campaigning and animal rights activism at Plant-Based Universities / Animal Rising
Now pivoting to the (future) impact of AI on biologically and artificially sentient beings
Currently lead organiser of the AI, Animals, & Digital Minds conference in London in June 2025
I support PauseAI primarily because I want to reduce the future probability and prevalence of intense suffering (including but not limited to s-risks) caused by powerful AI, and only secondarily because I want to reduce the risk of human extinction from powerful AI
However, couching demands for an AGI moratorium in terms of “reducing x-risk” rather than “reducing suffering” seems:
More robust to the kind of backfire risk that suffering-focused people at, e.g., the Center on Long-Term Risk (CLR) are worried about
More effective in communicating catastrophic AI risk to the public