I understood your rejection of the total ordering on populations, and as I say, this is an idea that others have tried to apply to this problem before.
But the approach others have tried to take is to use the lack of a precise "better than" relation to evade the logic of the repugnant conclusion arguments, while still ultimately concluding that population Z is worse than population A. If you only conclude that Z is not worse than A, and A is not worse than Z (i.e. we should be indifferent about taking actions which transform us from world A to world Z), then a lot of people would still find that repugnant!
Or are you saying that your theory tells us not to transform ourselves to world Z? Because we should only ever do anything that will make things actually better?
If so, how would your approach handle uncertainty? What probability of a world Z should we be willing to risk in order to improve a small amount of real welfare?
And there's another way in which your approach still contains some form of the repugnant conclusion. If a population stopped dealing in hypotheticals and actually started taking actions, so that these imaginary people became real, then you could imagine a population going through all the steps of the repugnant conclusion argument process, thinking they were making improvements on the status quo each time, and finding themselves ultimately ending up at Z. In fact it can happen in just two steps, if the population of B is made large enough, with small enough welfare.
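To make the two-step version concrete (a sketch with purely illustrative numbers of my own choosing):

\[
\begin{aligned}
A:&\quad 10 \text{ people at welfare } 100 &&\text{(real total } 1000\text{)}\\
A \to B:&\quad \text{create } 10^6 \text{ extra people at welfare } 0.01 &&\text{(imaginary total } +10^4\text{; real welfare untouched)}\\
B \to Z:&\quad \text{equalise all } 10^6 + 10 \text{ people at welfare } 1 &&\text{(real total } \approx 10^6 > 11{,}000\text{)}
\end{aligned}
\]

Each step looks like an improvement at the moment it is taken: the first increases imaginary welfare without touching real welfare, and the second increases total real welfare once the new people exist. Yet the endpoint is Z.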
I find it a bit strange that things should be different when they happen in reality than when they happen in our heads. You could imagine people thinking:
"Should we create a large population B at small positive welfare?"
"Sure, it increases positive imaginary welfare and does nothing to real welfare."
"But once we've done that, they will then be real, and so then we might want to boost their welfare at the expense of our own. We'll end up with a huge population of people with lives barely worth living, which seems quite repugnant."
"It is repugnant; we shouldn't prioritise imaginary welfare over real welfare. Those people don't exist."
"But if we create them they will exist, so then we will end up deciding to move towards world Z. We should take action now to stop ourselves being able to do that in future."
I find this situation of people being in conflict with their future selves quite strange. It seems irrational to me!
"Or are you saying that your theory tells us not to transform ourselves to world Z? Because we should only ever do anything that will make things actually better?"
Yes, and the previous description you gave is not what I intended.
"If so, how would your approach handle uncertainty? What probability of a world Z should we be willing to risk in order to improve a small amount of real welfare?"
This is a reasonable question, but I do not think it is a major issue, so I will set it aside for now.
"And there's another way in which your approach still contains some form of the repugnant conclusion. If a population stopped dealing in hypotheticals and actually started taking actions, so that these imaginary people became real, then you could imagine a population going through all the steps of the repugnant conclusion argument process, thinking they were making improvements on the status quo each time, and finding themselves ultimately ending up at Z. In fact it can happen in just two steps, if the population of B is made large enough, with small enough welfare."
This is true, and I noticed it myself. However, this actually comes from the assumption that more net imaginary welfare is always a good thing, which was one of the "WLOG" assumptions I made that is not needed for the refutation of the Repugnant Conclusion. If we instead take an averaging or more egalitarian approach to imaginary welfare, I think the problem doesn't have to appear.
For instance, if we now stipulate that any decision (given the constraints on real welfare) under which the average welfare of the imaginary population is at least equal to the average of the real population is better than any decision without this property, then the problem is gone. (Remark: we do still need the real/imaginary divide here to avoid the Repugnant Conclusion.)
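Stated a little more formally (this is my own notation, not part of the original construction): write \(\bar{w}_R\) for the average welfare of the real population, and, for a decision \(d\), write \(\bar{w}_{I(d)}\) for the average welfare of the imaginary population that \(d\) would create. Then, among decisions with the same effect on real welfare,

\[
\bar{w}_{I(d)} \ge \bar{w}_R \quad\text{and}\quad \bar{w}_{I(d')} < \bar{w}_R \quad\Longrightarrow\quad d \succ d'.
\]

This blocks the two-step route to Z, since creating a vast population at welfare far below the current average already fails the condition at the first step.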
This may seem rather ad hoc, and it is, but it could be framed as, "A priority is that the average welfare of future populations is at least as good as it is now", which seems reasonable.
[Edit: actually, I think this doesn't entirely work: if you are forced to pick between two populations, as in your other example, you may get the same scenario as the one you describe.
Edit 2: Equally, if you are "forced" to make a decision, then perhaps some of those people should be considered real in a sense, since people are definitely coming into existence one way or another. I think it's important to note that the idea was designed to work for unforced decisions, so I reckon it's likely that treating "forced populations" as imaginary is not the correct generalisation.]
As I say, I had noticed this particular issue myself (when I first wrote this post, in fact). I don't deny that the construction, in its current state, is flawed. However, to me, these flaws seem generally less severe and more tractable (would you disagree? just curious), and so I haven't spent much time thinking about them.
(Even the "Axiom of Comparison", which I say is the most important part of the construction, may not be exactly the right approach. But I believe it's on the right lines.)