“Or are you saying that your theory tells us not to transform ourselves to world Z? Because we should only ever do anything that will make things actually better?”
Yes—and the previous description you gave is not what I intended.
“If so, how would your approach handle uncertainty? What probability of a world Z should we be willing to risk in order to improve a small amount of real welfare?”
This is a reasonable question, but I do not think it is a major issue, so I will set it aside for now.
“And there’s another way in which your approach still contains some form of the repugnant conclusion. If a population stopped dealing in hypotheticals and actually started taking actions, so that these imaginary people became real, then you could imagine a population going through all the steps of the repugnant conclusion argument process, thinking they were making improvements on the status quo each time, and finding themselves ultimately ending up at Z. In fact it can happen in just two steps, if the population of B is made large enough, with small enough welfare.”
This is true, and I noticed it myself. However, it comes from the assumption that more net imaginary welfare is always a good thing, which was one of the “WLOG” assumptions I made but which is not actually needed for the refutation of the Repugnant Conclusion. If we instead take an averaging or more egalitarian approach to imaginary welfare, I think the problem need not appear.
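To make the two-step failure concrete, here is a minimal worked sketch under the total-imaginary-welfare assumption (the numbers and notation are mine, purely illustrative). Suppose the real population $A$ has $n$ people, each at welfare $w > 0$, and a decision would create an imaginary population $B$ of $N$ people, each at welfare $\varepsilon > 0$. The net imaginary welfare of $B$ is $N\varepsilon$, and

$$N\varepsilon > nw \quad \text{whenever} \quad N > \frac{nw}{\varepsilon},$$

so for any $\varepsilon$, however small, a large enough $B$ counts as an improvement. Once $B$ is made actual, it is already a $Z$-like world, which is exactly the two-step collapse described above.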
For instance, suppose we now stipulate that any decision (given the constraints on real welfare) whose imaginary population has average welfare at least equal to the average of the real population is better than any decision without this property; then the problem is gone. (Remark: we still need the real/imaginary divide here to avoid the Repugnant Conclusion.)
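A hedged formalisation of this stipulation (the notation is mine, not part of the original construction): write $\bar{w}_R$ for the average welfare of the real population and, for a decision $d$, write $\bar{w}_{I(d)}$ for the average welfare of the imaginary population it would create. Then, among decisions satisfying the constraints on real welfare,

$$\bar{w}_{I(d_1)} \ge \bar{w}_R \;\text{ and }\; \bar{w}_{I(d_2)} < \bar{w}_R \;\implies\; d_1 \succ d_2.$$

In particular, this blocks the large-$B$ move sketched above, since there $\varepsilon < \bar{w}_R$.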
This may seem rather ad hoc, and it is, but it could be framed as, “A priority is that the average welfare of future populations is at least as good as it is now”, which seems reasonable.
[Edit: actually, I think this doesn’t entirely work, in that if you are forced to pick between two populations, as in your other example, you may get the same scenario you describe.
Edit 2: Equally, if you are “forced” to make a decision, then perhaps some of those people should be considered real in a sense, since people are definitely coming into existence one way or another. It’s important to note that the idea was designed to work for unforced decisions, so I reckon it’s likely that treating “forced populations” as imaginary is not the correct generalisation.]
As I say, I had noticed this particular issue myself (when I first wrote this post, in fact). I don’t deny that the construction, in its current state, is flawed. However, to me these flaws seem generally less severe and more tractable (would you disagree? I’m just curious), and so I haven’t spent much time thinking about them.
(Even the “Axiom of Comparison”, which I say is the most important part of the construction, may not be exactly the right approach. But I believe it’s on the right lines.)