
Elliott Thornley (EJT)

Karma: 1,130

I work on AI alignment. Right now, I’m using ideas from decision theory to design and train safer artificial agents.

I also do work in ethics, focusing on the moral importance of future generations.

You can email me at thornley@mit.edu.

AI safety can be a Pascal’s mugging even if p(doom) is high

Elliott Thornley (EJT) · 25 Apr 2026 16:20 UTC
36 points
10 comments · 1 min read · EA link

Preference gaps as a safeguard against AI self-replication

Bradford Saad · 26 Nov 2025 14:57 UTC
16 points
0 comments · 11 min read · EA link

Shutdownable Agents through POST-Agency

Elliott Thornley (EJT) · 16 Sep 2025 12:10 UTC
18 points
0 comments · 54 min read · EA link
(arxiv.org)

A Fission Problem for Person-Affecting Views (Elliott Thornley)

Global Priorities Institute · 7 Nov 2024 15:01 UTC
20 points
2 comments · 3 min read · EA link