Saving Average Utilitarianism from Tarsney - Self-Indication Assumption cancels solipsistic swamping.
Epistemic status: these are just my own thoughts and ideas, so they might well be rubbish. I do believe, however, that this is a crucial consideration when evaluating the plausibility of Average Utilitarianism.
TLDR:
Christian Tarsney argued that Average Utilitarians should be egoists
I refute his argument, using the Self-Indication Assumption
This is mainly a response to Tarsney's post/paper, in which he argues against Average Utilitarianism. If you haven't read it, you should still be able to understand this post perfectly well, as I go through the main points of his argument. This post might be relevant for people involved in Global Priorities Research, and for people who generally care about which moral theories are plausible. A lively discussion of Tarsney's paper can be found here.
Introduction
How ought we to act, given that we might be the only real person in the universe? Most moral theories let us neglect the small probabilities of such absurd scenarios. Tarsney discovered that Average Utilitarianism gives huge weight to such solipsistic considerations. He calls this effect solipsistic swamping. In this post, I show that the effect of solipsistic swamping is exactly canceled out by taking anthropic evidence into account via the Self-Indication Assumption.
First, I am going to recount the argument for solipsistic swamping and lay out the Self-Indication Assumption. Then I am going to calculate the way in which these two considerations cancel out.
Solipsistic swamping: a challenge to average utilitarianism
Solipsism is the belief that one is the only real person in the universe. Average Utilitarianism is a moral theory that prescribes the maximization of average wellbeing. There are different versions of Average Utilitarianism, which differ in how "average wellbeing" is defined (Hurka, 1982).
In comparison to Total Utilitarianism, Average Utilitarianism has the nice property that it avoids the Repugnant Conclusion. That is, unlike Total Utilitarianism, it does not prefer a huge number of people whose lives are barely worth living over a smaller number of people who lead happy lives (Parfit, 1984, p. 420).
In his paper "Average Utilitarianism Implies Solipsistic Egoism", Christian J. Tarsney identifies an interesting flaw in Average Utilitarianism: even a tiny credence in solipsism leads expected average utility maximizers to act egoistically. This property is called solipsistic swamping (Tarsney, 2020).
Let us see how solipsistic swamping plays out numerically by taking a look at Alice the average utility maximizer and Tom the total utility maximizer. Let $c_s$ be Alice's and Tom's credence in solipsism. Tarsney does a Fermi estimate of a rational credence in solipsism and concludes that it should lie somewhere between $10^{-1}$ and $10^{-9}$. Let furthermore $N_p$ be the total number of people over the course of human history. Tarsney estimates this number to be beyond $10^{11}$, and many orders of magnitude bigger if we consider non-human animals.
Alice and Tom now face the choice between giving themselves 1 unit of wellbeing and giving other people $U_{alt}$ units of wellbeing. How will they act?
To see what Tom will do, we have to calculate the expected total utility of both action alternatives:
$$U_t(\text{ego}) = (1-c_s)\cdot 1 + c_s\cdot 1$$
$$U_t(\text{alt}) = (1-c_s)\cdot U_{alt} + c_s\cdot 0$$
$$U_t(\text{alt}) > U_t(\text{ego}) \iff U_{alt} > \frac{1}{1-c_s}$$
This calculation computes the expected total utilities of Tom acting selfishly, $U_t(\text{ego})$, and Tom acting altruistically, $U_t(\text{alt})$. This is done by multiplying the probability that solipsism is true ($c_s$) or false ($1-c_s$) with the utility generated in each eventuality. Tom will take the action with the higher expected total utility.
So Tom will act altruistically as long as the wellbeing he provides to others is bigger than the reciprocal of his credence that solipsism is false. For small credences in solipsism, this leads him to value his own wellbeing only slightly higher than the wellbeing of others.
To see what Alice will do, we have to do the same with the average utility:
$$U_a(\text{ego}) = (1-c_s)\cdot \frac{1}{N_p} + c_s\cdot \frac{1}{1}$$
$$U_a(\text{alt}) = (1-c_s)\cdot \frac{U_{alt}}{N_p} + c_s\cdot \frac{0}{1}$$
$$U_a(\text{alt}) > U_a(\text{ego}) \iff U_{alt} > 1 + \frac{N_p c_s}{1-c_s}$$
Here, I have computed the expected average utilities of her actions.
So Alice will only act altruistically if the wellbeing she can provide to others exceeds the threshold of $1 + \frac{N_p c_s}{1-c_s}$, which is really high in a highly populated universe, even for very low credences in solipsism.
Tarsney shows that, with these estimated values, Alice would rather provide herself 1 unit of wellbeing than provide 1000 other people with 1000 units of wellbeing.
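To get a feel for these thresholds, here is a minimal numerical sketch. The specific values of $c_s$ and $U_{alt}$ are illustrative picks from within Tarsney's estimated ranges, not his exact worked example; I also assume the 1000 people receive 1000 units each, so $U_{alt} = 10^6$ in total.

```python
# Altruism thresholds from the solipsistic-swamping argument.
# Illustrative values only, chosen from within Tarsney's estimated ranges.

c_s = 1e-3    # credence in solipsism (Tarsney: between 1e-1 and 1e-9)
N_p = 1e11    # total persons across history (Tarsney: beyond 1e11)

# Tom (total utilitarian) acts altruistically iff U_alt > 1 / (1 - c_s).
tom_threshold = 1 / (1 - c_s)

# Alice (average utilitarian) acts altruistically iff
# U_alt > 1 + N_p * c_s / (1 - c_s).
alice_threshold = 1 + N_p * c_s / (1 - c_s)

U_alt = 1000 * 1000  # 1000 people times 1000 units each

print(tom_threshold)             # ~1.001: Tom is almost perfectly altruistic
print(alice_threshold)           # ~1e8: far above U_alt = 1e6
print(U_alt > alice_threshold)   # False: Alice keeps the unit for herself
```

Even at the bottom of Tarsney's credence range ($c_s = 10^{-9}$), Alice's threshold is still about $101$, so her altruism remains hostage to the size of $N_p$.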
This can be seen as an argument against Average Utilitarianism or for egoism.
Finding another good argument against solipsism might avoid solipsistic swamping for now by reducing $c_s$, but if we ever found large alien civilizations, or discovered that grass is sentient, we would have to return to egoism. This prospect seems absurd. The only thing that can counteract solipsistic swamping is an argument against solipsism that scales with the number of persons in the universe.
The Self-Indication Assumption: reasoning under anthropic bias
The Self-Indication Assumption is one of several competing methods of anthropic reasoning, that is, reasoning about matters that had an influence on your own existence.
The Self-Indication Assumption, in a nutshell, says:
Reason as though you were a randomly selected person out of all possible persons.
Its main competitor is the Self-Sampling Assumption, which says the following:
Reason as though you were a randomly selected person out of all actual persons.
(Bostrom, 2003)
To see how this difference pans out in practice, let us look at an example:
Imagine God revealed to you that she flipped a fair coin at the beginning of the universe. If the coin came up heads, she created a universe with one planet that hosts a civilization. If the coin came up tails, she created 99 planets that host civilizations, but which are billions of light-years apart. What should your credence be that the coin came up tails?
Using the Self-Sampling Assumption, you should arrive at a credence of 50%: the coin is fair, and you have no further evidence to go on.
Using the Self-Indication Assumption, you should arrive at a credence of 99%: assuming that all civilizations are equally populated, 99 out of 100 possible persons live in a universe where God has thrown tails.
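The SIA number can be checked by direct counting; a small sketch follows. Normalizing each civilization to one person is my simplifying assumption; the argument only needs them to be equally populated.

```python
# SSA vs SIA credences for the God's-coin example, by direct counting.
# Each civilization is normalized to one person for simplicity.

p_heads = p_tails = 0.5
persons_heads = 1    # one inhabited planet if heads
persons_tails = 99   # 99 inhabited planets if tails

# SSA: your own existence is no evidence; credence stays at the prior.
ssa_tails = p_tails

# SIA: weight each hypothesis by how many persons it contains.
sia_tails = (p_tails * persons_tails) / (
    p_heads * persons_heads + p_tails * persons_tails
)

print(ssa_tails)  # 0.5
print(sia_tails)  # 0.99
```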
One advantage of the Self-Indication Assumption is that it refutes the Doomsday Argument, which reasons from anthropic evidence alone that the world is likely going to end soon (Bostrom, 2003).
Calculating solipsistic swamping with the Self-Indication Assumption
In his estimations, Tarsney does not take any anthropic evidence into account when calculating his credence in solipsism. By doing this, he implicitly assumes the Self-Sampling Assumption. Let us now look at where it leads us if we assume the Self-Indication Assumption instead.
Let us say we have a credence of $c_s$ in solipsism before taking into account any anthropic evidence. If solipsism is true, there is only 1 person, and if solipsism is false, there are $N_p$ persons. The observation that we are indeed a person ourselves then leads us to a credence of
$$\frac{c_s\cdot 1}{c_s\cdot 1 + (1-c_s)\cdot N_p} = \frac{1}{1 + N_p/c_s - N_p} \approx \frac{c_s}{N_p}$$
in solipsism.
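This anthropic update just weights the prior on each hypothesis by the number of persons that hypothesis contains. A minimal sketch, with illustrative values for $c_s$ and $N_p$:

```python
from fractions import Fraction  # exact arithmetic avoids float rounding


def sia_credence_solipsism(c_s, N_p):
    """SIA-updated credence in solipsism: weight the prior c_s on a
    1-person world against the prior (1 - c_s) on an N_p-person world."""
    return (c_s * 1) / (c_s * 1 + (1 - c_s) * N_p)


c_s = Fraction(1, 1000)  # illustrative prior, within Tarsney's range
N_p = 10**11
posterior = sia_credence_solipsism(c_s, N_p)

print(float(posterior))  # ~1.0e-14
print(float(c_s / N_p))  # ~1.0e-14, the c_s / N_p approximation
```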
Let us now see how Alice and Tom choose between the selfish act and the altruistic act under these conditions.
$$U_t(\text{ego}) = \left(1-\frac{1}{1+N_p/c_s-N_p}\right)\cdot 1 + \frac{1}{1+N_p/c_s-N_p}\cdot 1$$
$$U_t(\text{alt}) = \left(1-\frac{1}{1+N_p/c_s-N_p}\right)\cdot U_{alt} + \frac{1}{1+N_p/c_s-N_p}\cdot 0$$
$$U_t(\text{alt}) > U_t(\text{ego}) \iff U_{alt} > 1 + \frac{1}{N_p}\cdot\frac{c_s}{1-c_s}$$
Tom the total utilitarian still chooses the altruistic action as long as Ualt is bigger than 1 and his initial credence in solipsism is not too high.
$$U_a(\text{ego}) = \left(1-\frac{1}{1+N_p/c_s-N_p}\right)\cdot \frac{1}{N_p} + \frac{1}{1+N_p/c_s-N_p}\cdot \frac{1}{1}$$
$$U_a(\text{alt}) = \left(1-\frac{1}{1+N_p/c_s-N_p}\right)\cdot \frac{U_{alt}}{N_p} + \frac{1}{1+N_p/c_s-N_p}\cdot \frac{0}{1}$$
$$U_a(\text{alt}) > U_a(\text{ego}) \iff U_{alt} > 1 + \frac{c_s}{1-c_s}$$
Alice the average utilitarian now also chooses the altruistic option for reasonably small initial credences in solipsism. Her choice does not even depend on the total number of persons in the universe. This means that finding alien civilizations or grass sentience would not impact Alice's practical level of altruism, which is a nicely intuitive property.
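The cancellation can also be checked numerically: plugging the SIA-updated credence into Alice's threshold formula gives the same answer no matter how large $N_p$ is. A small sketch (the value of $c_s$ and the range of $N_p$ are illustrative):

```python
def sia_credence(c_s, N_p):
    """SIA-updated credence in solipsism for prior c_s and population N_p."""
    return c_s / (c_s + (1 - c_s) * N_p)


def alice_threshold(credence, N_p):
    """U_alt must exceed this for the altruistic act to maximize
    expected average utility, given the stated credence in solipsism."""
    return 1 + N_p * credence / (1 - credence)


c_s = 1e-3  # illustrative prior credence in solipsism
for N_p in (1e11, 1e20, 1e30):  # humans only, aliens, sentient grass, ...
    threshold = alice_threshold(sia_credence(c_s, N_p), N_p)
    print(N_p, threshold)  # threshold stays ~1.001 for every N_p
```

Without the SIA update, the same threshold function grows linearly in $N_p$; with it, $N_p$ cancels exactly, which is the central claim of this post.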
Conclusion
Solipsistic swamping shows that average utilitarians should behave egoistically under the Self-Sampling Assumption. If one adopts the Self-Indication Assumption, on the other hand, this conclusion can be refuted. This is a solid rebuttal for any average utilitarians who do not want to adhere to a moral theory that preaches egoism. I find it astonishing how elegantly the numbers work out, in exactly such a way that the total number of persons cancels out. This reminds me of the elegant way the Self-Indication Assumption refutes the Doomsday Argument, and it might be seen as evidence for the Self-Indication Assumption.
Literature
Christian J. Tarsney (2020): "Average Utilitarianism Implies Solipsistic Egoism"
Nick Bostrom, Milan M. Ćirković (2003): "The Doomsday Argument and the Self-Indication Assumption: Reply to Olum", The Philosophical Quarterly, Volume 53
Derek Parfit (1984): "Reasons and Persons", Oxford University Press
T. M. Hurka (1982): "Average Utilitarianisms", Analysis, Vol. 42, Oxford University Press