Since the heart of your case is “well we know what human experience is like so we can treat that as a fixed point”, I’m just going to point out various ways in which we don’t necessarily know what human experience is like, and some of the implications if we more narrowly try to anchor on what we know and otherwise adopt what I take to be your stance on the two-envelope problem:
- We each only experience our own consciousness
  - It seems decently likely that humans vary in some dimension like degree- or intensity-of-consciousness
  - Generically, we won’t know if we’re above- or below-average on this
  - So in expectation, others’ experiences all matter more than our own
  - But in aggregate, a society of fully altruistic people would make errors if they each acted on the assumption that their own experience matters less in expectation than other people’s
- In the moment of writing this, I don’t know what intense pain or intense pleasure feel like
  - I can only base my judgement of these things on memory
  - But memory, as we know in many contexts, could be faulty
  - Because there is more at stake in worlds where my memory is minimizing rather than exaggerating my past experiences, I should act on the assumption that my memory is systematically skewed in this way
- It’s not unusual for people to lie to themselves about their own experiences
  - e.g. telling themselves things are fine while at some level experiencing significant psychological suffering
  - So we should assume that our top-level consciousness doesn’t always have full access to our morally relevant experience, even in the moment of experiencing it
  - Our uncertainty should presumably include some worlds where a large majority of our morally relevant experience is opaque to us; so in expectation the moral weight we assign ourselves should be rather higher than the one which is experienced and hence “known”
- We’re unable to tell how many times our experience is being instantiated
  - On accounts where that’s morally relevant, this could have a big impact on the expectation of the moral worth of our experiences
To be clear, I don’t endorse the conclusions here — but in each case my instinct is that I’m getting off the train by saying “seems like there’s some two-envelope type phenomenon going on here, so I’m not happy straightforwardly taking expectations”.
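To make the first bullet cluster concrete, here is a minimal numeric sketch of the two-envelope structure being gestured at (my illustration, with a made-up ratio; nothing here is from the comment itself):

```python
# Suppose each person gives 50% credence to being the "low-intensity" type and
# 50% to being the "high-intensity" type, where the high type's experiences
# carry r times the moral weight of the low type's (r = 2 is arbitrary).
r = 2.0

# Anchoring on my own experience as the unit (weight 1), a random other person
# is equally likely to be the opposite type, so their expected weight relative
# to mine is:
expected_other_over_self = 0.5 * r + 0.5 * (1 / r)
print(expected_other_over_self)  # 1.25 > 1: "others matter more than me"

# By symmetry, the other person runs the same calculation and concludes that
# *my* experience matters 1.25x theirs. Both ratios cannot exceed 1 at once,
# which is the two-envelope structure: the verdict depends on whose experience
# is privileged as the fixed unit when the expectation is taken.
```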
I basically agree with all of this, and make some similar points in my sections Multiple possible reference points and Conscious subsystems. I think there are still two-envelopes problems between the things we actually access, and we don’t have a nice way of uniquely fixing comparisons. But I think it’s defensible to do everything human-relative, or relative to your own experiences (which are human, so this is still human-relative), i.e. relative to what’s accessed. You’ll need to use multiple reference points.
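One way to picture why a single expectation isn’t reference-point-free, and hence why multiple reference points come in, is the following sketch (placeholder credences and ratios of my own, not estimates from either commenter):

```python
# Toy credences over two theories of how some other being's experience
# compares to a human's: 0.1x the weight under theory A, 10x under theory B.
credences = [0.5, 0.5]
other_over_human = [0.1, 10.0]

# Reference point 1: fix the human experience at weight 1.
e_other = sum(c * x for c, x in zip(credences, other_over_human))   # 5.05
# Reference point 2: fix the other being's experience at weight 1.
e_human = sum(c / x for c, x in zip(credences, other_over_human))   # 5.05

print(e_other, e_human)
# Each normalization says the *other* side matters ~5x more in expectation, so
# the answer tracks the choice of anchor; reporting the result under several
# reference points, rather than one unqualified expectation, is one way to act
# on the "multiple reference points" suggestion.
```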