EJT

Karma: 928

I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

A Fission Problem for Person-Affecting Views (Elliott Thornley)

Global Priorities Institute · 7 Nov 2024 15:01 UTC
20 points
2 comments · 3 min read · EA link

Towards shutdownable agents via stochastic choice

EJT · 8 Jul 2024 10:14 UTC
26 points
1 comment · 1 min read · EA link
(arxiv.org)

A non-identity dilemma for person-affecting views (Elliott Thornley)

Global Priorities Institute · 4 Apr 2024 16:30 UTC
13 points
3 comments · 3 min read · EA link
(globalprioritiesinstitute.org)

My favourite arguments against person-affecting views

EJT · 2 Apr 2024 10:57 UTC
81 points
33 comments · 17 min read · EA link

Critical-Set Views, Biographical Identity, and the Long Term

EJT · 28 Feb 2024 14:30 UTC
9 points
3 comments · 1 min read · EA link
(philpapers.org)

The Shutdown Problem: Incomplete Preferences as a Solution

EJT · 23 Feb 2024 16:01 UTC
26 points
0 comments · 1 min read · EA link