Not taking a side here, but couldn’t you get around this by framing your values as ‘maximizing the sum of global utility’? That way there is no need to compare Joe with [the absence of Joe]; I can simply say that Joe’s existence has caused my objective function to increase. (See the sketch below.)
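To make that framing concrete, here is a minimal sketch; the notation ($V$, $P$, $u_i$, $u_J$) is mine, not anything from the thread. A total-utilitarian objective just sums welfare over whoever exists, so adding Joe compares two world-totals rather than comparing Joe to his own nonexistence.

```latex
% Illustrative notation (assumed, not from the original comments):
% V is the objective, P the set of people who exist, u_i person i's welfare.
$$
V(P) = \sum_{i \in P} u_i,
\qquad
V(P \cup \{J\}) = V(P) + u_J > V(P) \quad \text{whenever } u_J > 0.
$$
```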
Not sure I follow. Are you assuming anti-realism about metaethics or something? Even so, if your assessment of outcomes depends, at least in part, on how good/bad those outcomes are for people, the problem remains.