I think the obvious formulation is relevant to the point I was trying to make. In particular, I was trying to get ahead of what I think is a pretty common first reaction to the non-identity problem: that it is an interesting point, but also clearly too technical and academic to really undermine the theory in practice, so whatever it says it cares about, we should just modify the theory so that it doesn't care about that. I think this is a natural first reaction, but the non-identity problem raises genuinely substantial issues that have stumped philosophers for decades, and just about any solution you come up with is going to involve serious costs and/or revisions relative to a conventional person-affecting view. For instance, while averagism is more superficially similar to person-affecting views (in that it cares about quality of life rather than quantity), totalism is actually closer to person-affecting logic in practice: it is more intuitive that you can in some sense benefit someone by bringing them into a life worth living than that you benefit someone by making sure they aren't born into a life that is worth living but worse than average. Either way, though, these are things that totalism and averagism respectively can trade off against the welfare of the people the two worlds have in common. It wouldn't surprise me if there was more promising work out there on this issue (you certainly seem better read on it than me), though it would surprise me if it really contradicted the point about serious costs and revisions I am trying to indicate.
I think the main costs for wide person-affecting views relative to narrow ones, for someone who wanted to solve the non-identity problem, are in terms of justifiability (not seeming too ad hoc or arbitrary) and complexity, since they need to “match” merely possible people with different identities across possible worlds, as in the non-identity problem. For someone set on both solving the non-identity problem and holding person-affecting views, I think there will be views that seem intuitively better to them than the closest narrow person-affecting view in basically all cases. What I'm imagining is that for most narrow views, there's a wide modification of the view, based on identifying counterparts across worlds, that would match their intuitions better in some cases and never worse. I'm of course not 100% certain, but I expect this to usually be approximately the case.