EJT

Karma: 933

I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

A Fission Problem for Person-Affecting Views (Elliott Thornley)

Global Priorities Institute · Nov 7, 2024, 3:01 PM
20 points
2 comments · 3 min read · EA link

Towards shutdownable agents via stochastic choice

EJT · Jul 8, 2024, 10:14 AM
26 points
1 comment · 1 min read · EA link
(arxiv.org)

A non-identity dilemma for person-affecting views (Elliott Thornley)

Global Priorities Institute · Apr 4, 2024, 4:30 PM
13 points
3 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

My favourite arguments against person-affecting views

EJT · Apr 2, 2024, 10:57 AM
84 points
36 comments · 17 min read · EA link

Critical-Set Views, Biographical Identity, and the Long Term

EJT · Feb 28, 2024, 2:30 PM
9 points
3 comments · 1 min read · EA link
(philpapers.org)