Hi, Toby. One of the core arguments here, which perhaps I didn’t fully illuminate, is that the “better than” relation is, I believe, fundamentally unsuitable for population ethics.
If you are a member of A, then Z is not better than A. So Z is worse than A, but only if you are a member of A. If you are a member of Z, you will find that A is not better than Z. So A is worse than Z, but only if you are a member of Z. In other words, my population ethics depends on the population.
In particular, if you are a member of A, it’s not relevant that the population of Z disagrees about which is better. Indeed, why would you care? The fallacy that every argument concerning the Repugnant Conclusion commits is assuming that we require a total ordering of populations by goodness. This is a complete red herring. We don’t. Doesn’t it suffice to know what is best for your particular population? Isn’t that the purpose of population ethics? I argue that the Repugnant Conclusion is merely the result of an unjustified fixation on a meaningless mathematical ideal (a total ordering).
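To make the shape of that concrete, here is a toy sketch (only a cartoon of the idea, not the construction from the post; the rule that only the evaluators’ welfare counts is a deliberate simplification of my own):

```python
# Toy sketch: "better than" indexed by the evaluating (currently real) population.
# The welfare numbers, and the rule that only the evaluators' welfare counts,
# are illustrative simplifications, not the post's Axiom of Comparison.

def better_than(candidate, status_quo, evaluators):
    """Is `candidate` better than `status_quo`, judged from the standpoint of
    the people in `evaluators`? Each world maps person -> welfare; people who
    do not exist in a world contribute nothing."""
    candidate_welfare = sum(candidate.get(p, 0) for p in evaluators)
    status_quo_welfare = sum(status_quo.get(p, 0) for p in evaluators)
    return candidate_welfare > status_quo_welfare

# World A: a small population with high welfare.
A = {f"a{i}": 100 for i in range(10)}
# World Z: a huge population with lives barely worth living.
Z = {f"z{i}": 1 for i in range(100_000)}

print(better_than(Z, A, evaluators=A))  # False: judged by A's members, Z is not better
print(better_than(A, Z, evaluators=Z))  # False: judged by Z's members, A is not better
# No contradiction: these are two different relations, and nothing forces them
# to combine into one total ordering over populations.
```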
I said something similar in my reply to Max Daniel’s comment; I am not sure if I phrased it better here or there. If this idea was not clear in the post (or even in this reply), I would appreciate any feedback on how to make it more apparent.
I understood your rejection of the total ordering on populations, and as I say, this is an idea that others have tried to apply to this problem before.
But the approach others have tried to take is to use the lack of a precise “better than” relation to evade the logic of the repugnant conclusion arguments, while still ultimately concluding that population Z is worse than population A. If you only conclude that Z is not worse than A, and A is not worse than Z (i.e. we should be indifferent about taking actions which transform us from world A to world Z), then a lot of people would still find that repugnant!
Or are you saying that your theory tells us not to transform ourselves to world Z? Because we should only ever do anything that will make things actually better?
If so, how would your approach handle uncertainty? What probability of a world Z should we be willing to risk in order to improve a small amount of real welfare?
And there’s another way in which your approach still contains some form of the repugnant conclusion. If a population stopped dealing in hypotheticals and actually started taking actions, so that these imaginary people became real, then you could imagine a population going through all the steps of the repugnant conclusion argument process, thinking they were making improvements on the status quo each time, and finding themselves ultimately ending up at Z. In fact it can happen in just two steps, if the population of B is made large enough, with small enough welfare.
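To make the two-step version concrete (with some made-up numbers of my own, purely for illustration):

```python
# Illustrative numbers only: how two "improvements" can land at world Z.
# World A: 10 people at welfare 100.
A = [100] * 10

# Step 1: create a very large population B at small positive welfare. At the
# moment of decision these people are imaginary, so on the original rule this
# adds imaginary welfare while leaving real welfare untouched.
B = [0.1] * 1_000_000
A_plus_B = A + B

# Step 2: once B's members exist, they are real, and redistributing towards
# equality benefits the overwhelming majority, so it can look like an
# improvement to the population making the decision.
equal_share = sum(A_plus_B) / len(A_plus_B)
print(round(equal_share, 3))  # ~0.101: everyone ends up barely above B's level, i.e. at world Z
```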
I find something a bit strange about it being different when it happens in reality from when it happens in our heads. You could imagine people thinking:
“Should we create a large population B at small positive welfare?”
“Sure, it increases positive imaginary welfare and does nothing to real welfare.”
“But once we’ve done that, they will then be real, and so we might want to boost their welfare at the expense of our own. We’ll end up with a huge population of people with lives barely worth living; that seems quite repugnant.”
“It is repugnant; we shouldn’t prioritise imaginary welfare over real welfare. Those people don’t exist.”
“But if we create them they will exist, so then we will end up deciding to move towards world Z. We should take action now to stop ourselves being able to do that in future.”
I find this situation of people being in conflict with their future selves quite strange. It seems irrational to me!
“Or are you saying that your theory tells us not to transform ourselves to world Z? Because we should only ever do anything that will make things actually better?”
Yes, that is what I’m saying; the earlier reading, that we should be indifferent between A and Z, is not what I intended.
“If so, how would your approach handle uncertainty? What probability of a world Z should we be willing to risk in order to improve a small amount of real welfare?”
This is a reasonable question, but I do not think it is a major issue, so I will not try to answer it now.
“And there’s another way in which your approach still contains some form of the repugnant conclusion. If a population stopped dealing in hypotheticals and actually started taking actions, so that these imaginary people became real, then you could imagine a population going through all the steps of the repugnant conclusion argument process, thinking they were making improvements on the status quo each time, and finding themselves ultimately ending up at Z. In fact it can happen in just two steps, if the population of B is made large enough, with small enough welfare.”
This is true, and I noticed it myself. However, it comes from the assumption that more net imaginary welfare is always a good thing, which was one of the “WLOG” assumptions I made that is not actually needed to refute the Repugnant Conclusion. If we instead take an averaging or more egalitarian approach to imaginary welfare, I think the problem doesn’t have to appear.
For instance, suppose we now stipulate that any decision (given the constraints on real welfare) whose imaginary population has average welfare at least equal to the average of the real population is better than any decision without this property; then the problem is gone. (Remark: we do still need the real/imaginary divide here to avoid the Repugnant Conclusion.)
This may seem rather ad hoc, and it is, but it could be framed as, “A priority is that the average welfare of future populations is at least as high as it is now”, which seems reasonable.
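As a rough sketch of how that rule would cash out (with a toy representation of decisions and an arbitrary tie-break of my own, not anything from the post):

```python
# Toy sketch of the stipulated rule: among available decisions (all assumed to
# respect the constraints on real welfare), any decision whose imaginary
# population's average welfare is at least the real population's current
# average beats any decision without that property.

def average(welfares):
    # Treat an empty imaginary population as vacuously meeting the bar;
    # this is an assumption, the post doesn't settle that case.
    return sum(welfares) / len(welfares) if welfares else float("inf")

def meets_bar(decision, real_population):
    return average(decision["imaginary"]) >= average(real_population)

def prefer(d1, d2, real_population):
    """Return the preferred decision. When both (or neither) meet the bar, fall
    back to total imaginary welfare; that is just one illustrative tie-break."""
    b1, b2 = meets_bar(d1, real_population), meets_bar(d2, real_population)
    if b1 != b2:
        return d1 if b1 else d2
    return max(d1, d2, key=lambda d: sum(d["imaginary"]))

# Current real population and two candidate decisions (numbers made up).
real = [100] * 10
huge_low = {"imaginary": [0.1] * 1_000_000}  # the "two steps to Z" style expansion
modest = {"imaginary": [100] * 5}            # a small addition at the current average

print(prefer(huge_low, modest, real) is modest)  # True: the Z-style step no longer wins
```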
[Edit: actually, I think this doesn’t entirely work, in that if you are forced to pick between two populations, as in your other example, you may still get the scenario you describe.
Edit 2: Equally, if you are “forced” to make a decision, then perhaps some of those people should be considered real in a sense, since people are definitely coming into existence one way or another. I think it’s important to note that the idea was designed to work for unforced decisions, so I reckon it’s likely that treating “forced populations” as imaginary is not the correct generalisation.]
As I say, I had noticed this particular issue myself (when I first wrote this post, in fact). I don’t deny that the construction, in its current state, is flawed. However, to me, these flaws seem generally less severe and more tractable than the Repugnant Conclusion itself (would you disagree? Just curious), and so I haven’t spent much time thinking about them.
(Even the “Axiom of Comparison”, which I say is the most important part of the construction, may not be exactly the right approach. But I believe it’s on the right lines.)