EJT

Karma: 875

I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.

Previously, I was a Philosophy Fellow at the Center for AI Safety.

So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

How much should governments pay to prevent catastrophes? Longtermism's limited role

EJT · 19 Mar 2023 16:50 UTC
258 points
35 comments · 35 min read · EA link
(philpapers.org)

There are no coherence theorems

EJT · 20 Feb 2023 21:52 UTC
104 points
49 comments · 19 min read · EA link

My favourite arguments against person-affecting views

EJT · 2 Apr 2024 10:57 UTC
77 points
32 comments · 17 min read · EA link

The Impossibility of a Satisfactory Population Prospect Axiology

EJT · 12 May 2021 15:35 UTC
36 points
12 comments · 1 min read · EA link
(link.springer.com)

The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists

EJT · 23 Oct 2023 15:36 UTC
35 points
1 comment · 38 min read · EA link
(philpapers.org)

The price is right

EJT · 16 Oct 2023 16:34 UTC
27 points
5 comments · 4 min read · EA link
(openairopensea.substack.com)

The Shutdown Problem: Incomplete Preferences as a Solution

EJT · 23 Feb 2024 16:01 UTC
26 points
0 comments · 1 min read · EA link