On symmetric utilitarian views that are lexically sensitive to differences between infinite cardinals of value, I think it’s plausible that reducing extinction risk is among the best ways of achieving cardinally larger infinities of value: it buys us more time to pursue them, and plausibly they will be worked on anyway if we don’t go extinct.
However, with a major value lock-in event on its way, e.g. AGI or space colonization, increasing the likelihood that these larger infinities are pursued in the future, and the amount of work put toward pursuing them, seems at least as important as reducing extinction risk, since the default amount of resources devoted to this seems low to me, given how neglected it is.
I’d expect that doubling the expected amount of resources our descendants use to generate higher infinities, conditional on non-extinction, is about as good as halving extinction risk, and the former is far, far more neglected, so easier to achieve.
For fanatical suffering-focused views, preventing such higher infinities would instead be a top priority.
If you aggregate before taking differences, then conditional on the universe/multiverse already being infinite, larger cardinalities of (dis)utilities should already be pursued with high probability. Without a way to distinguish between different outcomes with the same cardinal number of value-bearers of the same sign, it seems like the only option that makes any difference to the aggregate utility in expectation is aiming to ensure that, for a given cardinal, there are fewer than that many utilities that are negative. But I’m not sure even this makes a difference. If you take expectations over the size of the universe before taking differences, the infinities dominate anyway, so you can ignore the possibility of a finite universe.
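A minimal sketch of why, on such an aggregate-first view, only cardinality-level changes can register. This assumes the aggregate tracks the cardinal number of utilities of each sign and that standard cardinal arithmetic (with choice) applies:

```latex
% Cardinal addition is absorptive for infinite cardinals (assuming AC):
%   \kappa + \lambda = \max(\kappa, \lambda) for infinite \kappa, \lambda.
% So if the universe already contains \aleph_\alpha utilities of a given
% sign, adding any \lambda \le \aleph_\alpha further utilities of that
% sign leaves the cardinal total unchanged:
\[
\aleph_\alpha + \lambda = \aleph_\alpha
\quad \text{whenever } \lambda \le \aleph_\alpha .
\]
% Only reaching a strictly larger cardinal \aleph_\beta > \aleph_\alpha,
% or changing which sign attains a given cardinal (e.g. ensuring fewer
% than \aleph_\alpha negative utilities), can change the aggregate.
```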
If you’re instead sensitive to the difference you make (i.e. you estimate differences before aggregating, either over individuals or the probability), then pursuing or preventing larger infinities matters again, and quality improvements may matter, too. Increasing or decreasing the probability of the universe/multiverse being infinite at all could still look valuable.
Is there any plausible path to producing ℵ2 (or even ℵ1) amounts of value with the standard metaphysical picture of the world we have? Or are you thinking that we may discover that it is possible and so should aim to position ourselves to make that discovery?
Affecting ℵ1 (and ℵ2, assuming the continuum hypothesis is false, i.e. ℵ2 ≤ |R|) utilities seems possible in a continuous spacetime universe with continuous quantum branching, but counting and aggregating value discretely, indexing and distinguishing moral patients by branches (among other characteristics), of which there are |R|. I think a continuous universe is still consistent with current physics; in particular, the Planck scale is apparently not a lower bound on the scales we can probe (here’s a paper making this claim in its abstract and in the background in section 1; you can ignore the rest of the paper). Of course, a discrete universe is also still consistent with current physics, and conscious experiences and other things that matter are only practically distinguishable discretely, anyway.
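For reference, the standard set-theoretic facts behind the ℵ1/ℵ2 claim (assuming ZFC):

```latex
% Cantor's theorem: the continuum is strictly larger than \aleph_0,
% so a continuum of branches already exceeds any countable indexing.
\[
|\mathbb{R}| = 2^{\aleph_0} > \aleph_0 ,
\qquad \text{hence } \aleph_1 \le |\mathbb{R}| .
\]
% The continuum hypothesis (CH) asserts |\mathbb{R}| = \aleph_1.
% If CH fails, the continuum is at least \aleph_2, so |\mathbb{R}|-many
% branches would suffice to index \aleph_2 distinct moral patients:
\[
\neg \mathrm{CH} \;\Longrightarrow\; \aleph_2 \le |\mathbb{R}| = 2^{\aleph_0} .
\]
```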
I mostly have in mind trying to influence the probability (your subjective probability) that there will be ℵα moral patients at all under discrete counting, or enough of their utilities that an aggregate you use, if any, will differ. I don’t see any particular plausible path to achieving this within the (or a) standard picture, but I am thinking “we may discover that it is possible and so should aim to position ourselves to make that discovery” and use it. I don’t have any particular ideas for affecting strictly more than |R| moral patients without moving away from the standard picture, either.