EJT

Karma: 934

I’m a Research Fellow in Philosophy at Oxford University’s Global Priorities Institute.

I work on AI alignment. Right now, I’m using ideas from decision theory to design and train safer artificial agents.

I also do work in ethics, focusing on the moral importance of future generations.

You can email me at elliott.thornley@philosophy.ox.ac.uk.

The Shutdown Problem: Incomplete Preferences as a Solution

EJT · 23 Feb 2024 16:01 UTC
26 points
0 comments · 42 min read · EA link

The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists

EJT · 23 Oct 2023 15:36 UTC
35 points
1 comment · 38 min read · EA link
(philpapers.org)

The price is right

EJT · 16 Oct 2023 16:34 UTC
27 points
5 comments · 4 min read · EA link
(openairopensea.substack.com)