Lloy2 🔹

Karma: 29

In catastrophic AI risk, extinction is only half the equation. The other half is ensuring a future that’s actually worth saving. I want to help ensure that a post-AGI world is a good one, not marred by moral disaster.

I will be doing my master's thesis on policy/governance addressing catastrophic AI risk. I'm currently hoping to focus on preventing AI from exacerbating or locking in totalitarianism, perhaps fascism in particular.

I’ve also been running my university’s Buddhism and Meditation society for three years.

The best place to reach me is by email: lloydrb100@gmail.com