I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.
Previously, I was a Philosophy Fellow at the Center for AI Safety.
So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.
You can email me at elliott.thornley@philosophy.ox.ac.uk.
Thanks for the comment! In this context, where we’re arguing about whether sufficiently-advanced artificial agents will satisfy the VNM axioms, I only have to give up Decision-Tree Separability*: roughly, the claim that sufficiently-advanced artificial agents will choose at a choice node in a way that depends only on the parts of the decision tree that can be reached from that node.
And Decision-Tree Separability* isn’t particularly plausible. It’s false if any sufficiently-advanced artificial agent acts in accordance with the following policy: ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’ And it’s easy to see why agents might act in accordance with that policy: it makes them immune to money-pumps for Completeness.
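To make the policy concrete, here’s a minimal sketch in Python. The `Agent` class, the `strict_prefs` encoding, and the `choose` method are illustrative names I’m introducing here, not anything from the post; the sketch assumes strict preferences are acyclic and that at least one option always remains permitted (as when keeping one’s current holding is on the table).

```python
class Agent:
    """An agent with possibly incomplete preferences.

    `strict_prefs` is a set of (x, y) pairs meaning x is strictly
    preferred to y; options related in neither direction are incomparable.
    """

    def __init__(self, strict_prefs):
        self.strict_prefs = set(strict_prefs)
        self.turned_down = set()  # options passed up at earlier choice nodes

    def prefers(self, x, y):
        return (x, y) in self.strict_prefs

    def choose(self, options):
        # The policy: rule out any option strictly dispreferred to
        # something previously turned down.
        permitted = [o for o in options
                     if not any(self.prefers(t, o) for t in self.turned_down)]
        # Among permitted options, never pick one strictly dispreferred to
        # another, so the agent never acts against its preferences.
        maximal = [o for o in permitted
                   if not any(self.prefers(p, o) for p in permitted)]
        choice = maximal[0]  # incomparabilities broken arbitrarily, by list order
        self.turned_down |= set(options) - {choice}
        return choice
```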
Also, one of the major downsides of resolute choice seems to be that agents sometimes have to act against their preferences. But, as I argue in the post, artificial agents with incomplete preferences who act in accordance with the policy above will never have to act against their preferences.
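For instance, here’s how the sketch above plays out in the standard single-souring money-pump for Completeness, where A is strictly preferred to A- and B is incomparable to both (the option names are, again, just illustrative):

```python
agent = Agent({("A", "A-")})  # A strictly preferred to A-; B incomparable to both

first = agent.choose(["B", "A"])      # trades A away for the incomparable B
second = agent.choose([first, "A-"])  # now offered A- in exchange for B

print(first, second)  # B B: having turned down A, the agent refuses the
                      # strictly-dispreferred A-, so it can't be money-pumped,
                      # and neither choice went against a strict preference
```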