I’m a Postdoctoral Research Fellow at Oxford University’s Global Priorities Institute.
Previously, I was a Philosophy Fellow at the Center for AI Safety.
So far, my work has mostly been about the moral importance of future generations. Going forward, it will mostly be about AI.
You can email me at elliott.thornley@philosophy.ox.ac.uk.
Nice point, but I think it comes at a serious cost.
To see why, consider a different case. In X, ten billion people live awful lives. In Y, those same ten billion people live wonderful lives. Clearly, Y is much better than X.
Now consider Y*, which is exactly the same as Y except that it contains one extra person, also with a wonderful life. As before, Y* is much better than X for the original ten billion people. If we say that the value of adding the extra person is undefined, and that this undefined value renders the value of the whole change from X to Y* undefined, we get the implausible result that Y* is not better than X. Given plausible principles linking betterness and moral requirements, we then get the result that we’re permitted to choose X over Y*. That seems very implausible, and so it counts against the claim that adding people results in undefined comparisons.