Autonomous Systems @ UK AI Safety Institute (AISI)
DPhil AI Safety @ Oxford (Hertford college, CS dept, AIMS CDT)
Former senior data scientist and software engineer + SERI MATS
I’m particularly interested in sustainable collaboration and the long-term future of value. I’d love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy encountering new perspectives and growing my understanding of the world and the people in it. I also love to read—let me know your suggestions! In no particular order, here are some I’ve enjoyed recently:
Ord—The Precipice
Pearl—The Book of Why
Bostrom—Superintelligence
McCall Smith—The No. 1 Ladies’ Detective Agency (and series)
Melville—Moby-Dick
Abelson & Sussman—Structure and Interpretation of Computer Programs
Stross—Accelerando
Simsion—The Rosie Project (and trilogy)
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
Hanabi (can’t recommend enough; try it out!)
Pandemic (ironic at time of writing...)
Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
Overcooked (my partner and I enjoy the foodie themes and the frantic real-time coordination of playing this)
People who’ve got to know me only recently are sometimes surprised to learn that I’m a pretty handy trumpeter and hornist.
Thanks for this thoughtful response!
This seems exactly right and is what I’m frustrated by. Though, going further than you give credit (or discredit) for, I frequently come across writing or talk of “US success in AI”, “US leading in AI”, “China catching up to the US”, and so on, all of which are almost nonsense as far as I’m concerned. What do those statements even mean? In good faith I keep hoping someone will describe what these sorts of claims mean in a way that clicks for me, but I have come to expect that there probably isn’t one.
Do people actually think that Google+OpenAI+Anthropic (for the sake of argument) are the US? Do they think the US government/military can or will appropriate those staff/artefacts/resources at some point? Are they referring to the integration of contemporary ML/DS into the economy? The military? Or to impacts on other indicators[1]? What do people mean by “China” here: the CCP, Alibaba, Tencent, …? If people mean these things, they should say those things, or otherwise say what they do mean. Otherwise I think people motte-and-bailey themselves (and others) into some really strange understandings. There’s no linear scoreboard on which “US” and “China” score points, but people behave and talk as if they actually think in those terms.
Thanks, this would indeed be too strong :) but it’s not what I mean. (Also thank you for the example bullets below that, for me and for other readers.)
I don’t mean to imply they have no influence on AI development and deployment[2]. What I meant by ‘not currently meaningful players in AI development and deployment’ was that, to date, governments have had little to no say in the course or nature of AI development. Rather, they have been mostly passive or unwitting passengers, with recent interventions comprising coarse economy-level lever-pulls, like your examples of regulation of chip production and sales. Can you think of a better compression of this than what I wrote? Perhaps ‘currently mainly passive except for coarse interventions at the economy level’?
The key difference between, e.g., the space race or nuclear/ICBM programmes and AI is that in those cases governments could appropriately be thought of as somewhat coherently instigating, steering, and directing, and could be described as key players in a real competition between them. With AI, none of those things is (currently) true. So ideally we would use different language to describe these different situations (especially because the misleading use of language is inflammatory).
I get exercised about this overall issue because on one model, this sort of failure of imagination and the confusion it gives rise to is exactly what leads to escalation and conflict, which I sense you agree on. We do not want sloppy foregone-conclusion thinking leading to WWIII with AI and nukes.
[1] What indicators? Education, unemployment, privacy, health, productivity, democracy, inequality, …?
[2] Ironically for a piece on bringing clarity through nuance, I evidently wasn’t clear enough about where I was drawing the boundaries in my initial post…