I myself reject hedonism and accept a broader view of welfare (given that we care about a broad range of things beyond happiness, e.g. life/freedom/achievement/love/whatever).
Hi Joel,

Hedonism is compatible with caring about "life/freedom/achievement/love/whatever", because all of those describe sets of conscious experiences, and hedonism is about valuing conscious experiences. I cannot think of something I value independently of conscious experiences, but I would welcome counterexamples.
There's the standard philosophical counterexample of the experience machine, including the reformulated Joshua Greene example that addresses status quo bias. But basically, the idea is this: would you rather that the world was real, or just an illusion while you're trapped as a brain in a vat (with the subjective sensory experience itself otherwise identical)? Almost certainly, and most people give this answer, you'll want the world to be real. That's because we don't just want to think that we're free/successful/in a loving relationship; we also actually want to be all those things.
In less philosophical terms, you can think about how you would not want your friends and family to actually hate you (even if you couldn't tell the difference). And that would also be why people care about having non-moral impact even after they're dead (e.g. authors hoping their posthumously published book is successful, or an athlete wanting their achievements to stand the test of time and not be bested at the next competition, or a mathematician wanting to actually prove some conjecture and not just think he did).
But basically, the idea is this: would you rather that the world was real, or just an illusion while you're trapped as a brain in a vat (with the subjective sensory experience itself otherwise identical)?
Thanks for the reply, Joel!

It depends on the specific properties of the real and simulated worlds, but my answer would certainly be guided by hedonic considerations:
My personal hedonic utility would be the same in the simulated and real worlds, so it would not be a deciding factor.
If I were the only (sentient) being in the simulated world, and there were lots of (sentient) beings in the real world, the absolute value of the total hedonic utility would be much larger for the real world.
As a result, I would prefer:
The real world if I expected the mean experience per being there to be positive (i.e. positive total hedonic utility).
The simulated world if I expected the mean experience per being in the real world to be negative (i.e. negative total hedonic utility), and I had positive experiences myself in the simulated world.
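As a toy formalization of the decision rule above (the function name and numerical framing are my own illustrative sketch, not anything stated in the discussion), one could write:

```python
def preferred_world(mean_exp_real: float, n_real: int, my_exp_sim: float) -> str:
    """Toy version of the rule above: compare worlds by total hedonic utility.

    Assumptions (mine, for illustration): the simulated world contains only
    me, so its total hedonic utility is just my own; the real world's total
    is the mean experience per being times the number of sentient beings.
    """
    total_real = mean_exp_real * n_real
    if total_real > 0:
        # Positive total hedonic utility in the real world -> prefer it.
        return "real"
    if my_exp_sim > 0:
        # Real world is net negative, but my simulated experiences are positive.
        return "simulated"
    # Both totals non-positive; the rule above is silent on this case.
    return "neither"
```

For instance, `preferred_world(0.1, 1000, 5.0)` picks the real world, while `preferred_world(-0.1, 1000, 5.0)` picks the simulated one.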
Hedonism says all that matters is conscious experiences, but that does not mean we should be indifferent between 2 worlds where our personal conscious experiences are the same. We still have to look into the experiences of other beings, unless we are perfectly egoistic, which I do not think we should be.
For me, a true counterexample to hedonism would have to present 2 worlds in which expected total (not personal) hedonistic utility (ETHU) were the same, and people still preferred one of them over the other. However, since we do not understand well how to calculate ETHU, we can only ensure 2 worlds have the same ETHU if they are exactly the same, in which case it does not make sense to prefer one over the other.
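In symbols (my own notation, just to pin the claim down, not notation used elsewhere in the discussion):

$$\text{ETHU} = \mathbb{E}\left[\sum_{i \in \text{sentient beings}} h_i\right],$$

where $h_i$ is the lifetime hedonic utility of being $i$. A counterexample on this view would need two worlds with equal ETHU between which people still had a stable preference.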
In less philosophical terms, you can think about how you would not want your friends and family to actually hate you (even if you couldn't tell the difference).
I agree. However, as I commented here, that is only an argument against egoistic hedonism, not altruistic hedonism (which is the one I support).
You can imagine a) everyone in their own experience machine isolated from everyone else, so that all the other "people" inside are not conscious (but the people believe the others are conscious, and there's no risk they'll find out they aren't), or b) people genuinely interacting with each other (in the real world, or virtual reality), making real connections with other real people. I think most people would prefer the latter for themselves, even if it makes them somewhat worse off. An impartial hedonistic view would recommend disregarding these preferences and putting everyone in the isolated experience machines anyway.
Thanks for the clarification! Some thoughts:

Not related to your point, but I would like to note it seems quite extreme to reject the application of hedonism in the context of welfare range estimates based on such a thought experiment.
It is unclear to me whether ETHU is greater in a) or b). It depends on whether it is more efficient to produce it via experience machines or genuine interactions (I suppose utility per being would be higher with experience machines, but maybe not utility per unit resources). So I do not think people preferring a) or b) is good evidence that there is something else which matters besides ETHU.
It does not seem possible to make a hard distinction between a) and b). I am only able to perceive reality via my own conscious experience, so there is a sense in which my body is in fact an experience machine.
I believe most people preferring b) over a) is very weak evidence that b) is better than a). Our intuitions are biased towards assessing the thought experiment based on how the words used to describe it make us feel. As a 1st approximation, I think people would be thinking about whether "genuine" and "real" sound better than "machine" and "isolated", and they do, so I am not surprised most people prefer b).
Being genuinely loved rather than just believing you are loved could matter to your welfare even if it doesn't affect your conscious experiences. Knowing the truth, even if it makes no difference to your experiences. Actually achieving something rather than falsely believing you achieved it.
Thanks for the examples, Michael!

I would say they work as counterexamples to egoistic hedonism, but not to altruistic hedonism (the one I support). In each pair of situations you described, my mental states (and therefore personal hedonic utility) would be the same, but the experiences of others around me would be quite different (and so would total hedonic utility):
Pretending to love should feel quite different from loving, and being fake generally leads to worse outcomes.
One is better positioned to improve the mental states of others if one knows what is true.
Actually achieving something means actually improving the mental states of others (to the extent one is altruistic), rather than only believing one did so.
For these reasons, rejecting wireheading is also compatible with hedonism. A priori, it does not seem like the best way to help others. One can specify in thought experiments that "everyone else['s hedonic utility] is taken care of", but I think it is quite hard to condition human answers on that, given that lots of our experiences go against the idea that having delusional experiences is both optimal for us and others.