if the value of welfare scales something-like-linearly
I think this is a critically underappreciated crux! Even accepting the other parts, it’s far from obvious that the intuitive approach of scaling value linearly, which works well enough in the near term and locally, remains correct indefinitely far out of distribution; simulating the same wonderful experience a billion times certainly isn’t a billion times as valuable as simulating it once.
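To make the scaling question concrete, here is one way to write it down (a sketch for illustration only, not something either the post or this comment commits to): linear aggregation values N identical simulations at N times the value of one, whereas a view on which only distinct experience-types count, or on which exact duplicates are discounted, does not.

```latex
% Linear (additive) aggregation: N identical simulations of an experience worth v
V_{\mathrm{lin}}(N) = \sum_{i=1}^{N} v = N\,v
% "Duplicates add nothing": only the distinct experience-type counts
V_{\mathrm{distinct}}(N) = v \quad \text{for all } N \ge 1
% Anything in between (sublinear discounting of exact copies) also breaks the
% "a billion copies is a billion times better" equation
V_{\mathrm{sub}}(N) = v\, f(N), \qquad f(1) = 1, \quad 1 \le f(N) \ll N \ \text{for large } N
```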
I disagree, but I don’t think this is really a crux. The ideal future could involve filling the universe with beings who have extremely good experiences compared to humans (and who do not resemble humans at all) but whose experiences are still very diverse.
And, while this is sort of an unanswered question about how qualia work, my guess is that for combinatorial reasons you could fill the accessible universe with (say) 10^40 beings who all have different experiences, where the worst of those experiences is only a bit worse than the best.
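As a rough, hedged illustration of the combinatorial point (the specific numbers below, 300 binary features and a tolerance of 30 flips, are made-up assumptions for the sketch, not claims about how qualia actually decompose), a modest feature space already contains far more than 10^40 distinct experiences that all sit close to the best one:

```python
import math

# Toy model (illustrative assumptions only): suppose an experience is characterized
# by N_FEATURES independent binary features, and flipping any one feature away from
# the best experience costs only a tiny sliver of value. Every experience within
# MAX_FLIPS flips of the best one is then "only a bit worse than the best".
N_FEATURES = 300
MAX_FLIPS = 30
TARGET_BEINGS = 10**40  # the number of distinct beings mentioned above

# Count the distinct experiences that differ from the best one in at most MAX_FLIPS features.
near_best = sum(math.comb(N_FEATURES, k) for k in range(MAX_FLIPS + 1))

print(f"distinct near-best experiences: {near_best:.2e}")                # on the order of 10^41
print(f"enough for 10^40 distinct beings? {near_best >= TARGET_BEINGS}")  # True
```

Under these (made-up) assumptions the count comes out around 2 × 10^41, so the 10^40 figure doesn’t require any exact duplicates.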
That’s a fair point, and I agree that it leads to a very different universe.
At that point, however (assuming we embrace moral realism and an absolute moral value attached to some non-subjective definition of qualia, which seems incoherent), it also seems to lead to a functionally unsolvable coordination problem for maximization across galaxies.
My sense is that most people in EA working on these topics disagree.
I don’t think that’s at all obvious, though it could be true.
I agree with you, as do most people outside of EA, but I believe almost everyone in EA working on these topics disagrees.
I meant that I don’t think it’s obvious that most people in EA working on this would agree.
I do think it’s obvious that most people overall would agree, though most would either disagree, or be unsure, that a simulation matters at all. It’s also very unclear how to count person-experiences in the first place, as Johnston’s personite paper argues (https://www.jstor.org/stable/26631215); I’ll also point to the general double-counting problem (https://link.springer.com/article/10.1007/s11098-020-01428-9) and suggest that it could apply here.
Interesting. Could you point to anyone in EA who does not agree with the additive view and works in this field?
It sounds like MichaelDickens’ reply is probably right, that we don’t need to consider identical experiences in order for this argument to go through.
But the question of whether identical copies of the same experience have any additional value is a really interesting one. I used to feel very confident that duplicates add no value at all. I’m now a lot more uncertain, after realising that this view seems to be in tension with the many-worlds interpretation of quantum mechanics: https://www.lesswrong.com/posts/bzSfwMmuexfyrGR6o/the-ethics-of-copying-conscious-states-and-the-many-worlds
I recently discussed this on Twitter with @Jessica_Taylor, and I think there’s a weird claim involved that collapses into believing either that distance changes moral importance, or that thicker wires in a computer increase its moral weight. (Similar to the cutting-dominoes-in-half example in that post, or the thicker pencil, but less contrived.) Alternatively, it confuses the question by claiming that identical beings at time t_0 are morally different because they differ at time t_n, which is a completely different claim!
I think the many-worlds interpretation confuses this by making it about causally separated beings, which are either, in my view, only a single being, or are different because they will diverge. And yes, different beings are obviously counted more than once, but that’s explicitly ignoring the question. (As a reductio: if we ask “Is 1 the same as 1?” the answer is yes, they are identical platonic numbers, but if we instead ask “Is 1 the same as 1 plus 1?” the answer is no, they are different because the second is… different, by assumption!)
I think there are pretty good reasons to expect any reasonable axiology to be additive.
I need to write a far longer response to that paper, but I’ll respond briefly (and flag @Christian Tarsney): my biggest crux is that I think they picked weak objections to causal domain restriction, and that far better objections apply. Secondarily, on axiological weights, the response that egalitarian views lead to rejecting different axiological weights seems to beg the question, and the next part ignores the fact that any acceptable response to causal domain restriction also addresses the issue of large background populations.