Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.
Re:

> For instance, many people wouldn’t want to enter solipsistic experience machines (whether they’re built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.

I just don’t trust this intuition very much. I think there is a lot of anxiety around experience machines due to:

- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what’s real
And to be clear, I share the intuition that experience machines seem bad, and yet I’m often totally content to play video games all day long, because doing so doesn’t violate those two conditions.
So what I’m roughly arguing is: We have some good reasons to be wary of experience machines, but I don’t think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
I agree that some people don’t seem to give hedonism a fair hearing when discussing experience machine thought experiments. But also, I think that some people have genuine reservations that make sense given their life goals.
Personally, I very much see the appeal of experience machines. Under the right circumstances, I’d be thrilled to enter! If I were single and my effective altruist goals were taken care of, I would leave my friends and family behind for a solipsistic experience machine. (I think I do care about having authentic relationships with friends and family to some degree, but definitely not enough!) I’d also enter a non-solipsistic experience machine if my girlfriend wanted to join and we’d continue to have authentic interactions (even if that opens up the possibility of having negative experiences). The reason I wouldn’t want to enter under default circumstances is that the machine would replace the person I love with a virtual person (this holds even if my girlfriend got her own experience machine, and everyone else on the planet too, for that matter). I know I wouldn’t necessarily be aware of the difference, and that things with a virtual girlfriend (or girlfriends?) could be incredibly good. Still, entering this solipsistic experience machine would go against the idea of loving someone for the person they are (instead of for how they make me feel).
I wrote more experience machine thought experiments here.
> but I don’t think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
I don’t think there’s such a thing as “the ethical value of a life,” at least not in a well-defined objective sense. (There are clearly instances where people’s lives aren’t worth living and instances where it would be a tragedy to end someone’s life against their will, so when I say the concept “isn’t objective,” I’m not saying that there’s nothing we can say about the matter. I just mean that it’s defensible for different people to emphasize different aspects of “the value of a life.” [Especially when we’re considering different contexts such as the value of an existing or sure-to-exist person vs. the value of newly creating a person that is merely a possible person at the time we face the decision.])