EJT

Karma: 934

I’m a Research Fellow in Philosophy at Oxford University’s Global Priorities Institute.

I work on AI alignment. Right now, I’m using ideas from decision theory to design and train safer artificial agents.

I also work in ethics, focusing on the moral importance of future generations.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

A Fission Problem for Person-Affecting Views (Elliott Thornley)

Global Priorities Institute · Nov 7, 2024, 3:01 PM
20 points
2 comments · 3 min read · EA link

Towards shutdownable agents via stochastic choice

EJT · Jul 8, 2024, 10:14 AM
26 points
1 comment · 23 min read · EA link
(arxiv.org)

A non-identity dilemma for person-affecting views (Elliott Thornley)

Global Priorities Institute · Apr 4, 2024, 4:30 PM
13 points
3 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

My favourite arguments against person-affecting views

EJT · Apr 2, 2024, 10:57 AM
84 points
36 comments · 17 min read · EA link

Critical-Set Views, Biographical Identity, and the Long Term

EJT · Feb 28, 2024, 2:30 PM
9 points
3 comments · 1 min read · EA link
(philpapers.org)