Autonomous Systems @ UK AI Safety Institute (AISI)
DPhil AI Safety @ Oxford (Hertford college, CS dept, AIMS CDT)
Former senior data scientist and software engineer + SERI MATS
I’m particularly interested in sustainable collaboration and the long-term future of value. I’d love to contribute to a safer and more prosperous future with AI! I’m always interested in discussions about axiology, x-risks, and s-risks.
I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read—let me know your suggestions! In no particular order, here are some I’ve enjoyed recently:
Ord—The Precipice
Pearl—The Book of Why
Bostrom—Superintelligence
McCall Smith—The No. 1 Ladies’ Detective Agency (and series)
Melville—Moby-Dick
Abelson & Sussman—Structure and Interpretation of Computer Programs
Stross—Accelerando
Simsion—The Rosie Project (and trilogy)
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
Hanabi (can’t recommend enough; try it out!)
Pandemic (ironic at time of writing...)
Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
Overcooked (my partner and I enjoy the foodie themes and the frantic real-time coordination this demands)
People who’ve got to know me only recently are sometimes surprised to learn that I’m a pretty handy trumpeter and hornist.
I appreciate this discussion a lot. Two things which stand out to me as deserving more emphasis.
First, though, a quick framing: ‘good epistemic outcomes’ are something like the product of ‘people trying to understand clearly’ and ‘people being able to do that effectively’. (Of course these are interrelated, since people’s willingness is affected by the practicalities; more on that in point 2.)
OK, the things:
It looks to me like most of the object-level task of collective epistemics is checking and piecing together good ‘secondary research’ (broadly construed): looking at provenance, tracking the evidence and reasoning dependencies behind a claim, proactively gathering the best arguments for and against, finding reasons to downweight certain testimony, etc.
Why? Almost all our information about our environment beyond our direct sensory access is mediated through highly iterated message passing, reinterpretation, aggregation, and so on—especially in the heights of science and the depths (!) of political/influence goings-on.
AI enables this (The Good) not so much (directly) by ‘knowing’ more or having ‘more insights’, but rather by hugely expanding the availability of clerical checking, tracing, and knowledge mapping work!
You kind of talk about this in the collective epistemics discussion, but I think it warrants more emphasis.
Most of the overall task of collective epistemics may be in the motivating, i.e. having more people, more of the time, actually trying to understand things accurately, rather than retreating into one or another alternative cognitive mode
The usual label I use for alternative cognitive modes is ‘tribal cognition’, where most of what’s said and recounted (and even believed), especially (but not even only) about what’s outside of the immediate sensory environment, is in service of building and maintaining allegiances and coalitions
When is ‘tribal cognition’ incentivised? I don’t fully know, but it has to do with:
When people are/feel threatened, they reach for affiliations which offer (perhaps passing or merely apparent) security
Abusers can play on this with a combination of bigging up threats and presenting themselves as effective and sympathetic
When the epistemic environment is difficult, accurate perception is harder and less rewarded
Abusers can push this. In politics: flood the zone, firehose of falsehoods, FUD. In science: p-hacking, importance-hacking, conflating/obscuring methodologies.
Generally, adding noise and more convincing fake content undermines The Good above—the ability to check and trace—not by making people believe the fake stuff but by making them correctly recognise that it’s hard to tell at all (hence ‘retreat’)
Certain coalition norms can encourage epistemic insularity and discourage (genuine) scrutiny
I think you’re touching on this in The Ugly, ‘undermine sense-making’. To me it’s possibly ‘most of the problem’! Or at least, understanding the conditions under which people mobilise one or another cognitive mode in sensemaking, and how those conditions can be influenced, is a really big part of the picture here.