Lloyd Rhodes-Brandon 🔹

Karma: 33

In catastrophic AI risk, extinction is only half the equation. The other half is ensuring a future that’s actually worth saving. I want to help ensure that a post-AGI world is a good one, not marred by moral disaster.

I co-founded the Nottingham AI Safety Initiative, where I’ve run an AI Governance and Strategy Fellowship as well as several AI safety events.

I will be doing my master’s thesis on policy/​governance addressing catastrophic AI risk. I’m currently hoping to focus on preventing AI from exacerbating or locking in totalitarianism, perhaps particularly fascism.

I’ve also been running my university’s Buddhism and Meditation society for three years.

The best place to reach me is my email: lloydrb100@gmail.com

AI Basics: Thrills or Chills?

Lloyd Rhodes-Brandon 🔹7 Jan 2026 15:45 UTC
2 points
0 comments · 1 min read · EA link

Some AI safety project & research ideas/questions for short and long timelines

Lloyd Rhodes-Brandon 🔹8 Aug 2025 21:08 UTC
13 points
0 comments · 5 min read · EA link

Democratising AI Alignment: Challenges and Proposals

Lloyd Rhodes-Brandon 🔹5 May 2025 14:50 UTC
2 points
2 comments · 4 min read · EA link

Sentience-Based Alignment Strategies: Should we try to give AI genuine empathy/compassion?

Lloyd Rhodes-Brandon 🔹4 May 2025 20:45 UTC
16 points
1 comment · 3 min read · EA link