A few questions about this:
Does this view imply that it is actually not possible to have a world where e.g. a machine creates one immortal happy person per day, forever, who then form an ever-growing line?
How does this view interpret cosmological hypotheses on which the universe is infinite? Is the claim that actually, on those hypotheses, the universe is finite after all?
It seems like lots of the (countable) worlds and cases discussed in the post can simply be reframed as never-ending processes, no? And then similar (identical?) questions will arise? Thus, for example, w5 is equivalent to a machine that creates a1 at −1, then a3 at −1, then a5 at −1, etc. w6 is equivalent to a machine that creates a1 at −1, then a2 at −1, a3 at −1, etc. What would this view say about which of these machines we should create, given the opportunity? How should we compare these to a w8 machine that creates b1 at −1, b2 at −1, b3 at −1, b4 at −1, etc?
Re: the Jaynes quote: I’m not sure I’ve understood the full picture here, but in general, to me it doesn’t feel like the central issues here have to do with dependencies on “how the limit is approached,” such that requiring that each scenario pin down an “order” solves the problems. For example, I think that a lot of what seems strange about Neutrality-violations in these cases is that even if we pin down an order for each case, the fact that you can re-arrange one into the other makes it seem like they ought to be ethically equivalent. Maybe we deny that, and maybe we do so for reasons related to what you’re talking about—but it seems like the same bullet.
My take (though I think I am less of an expert than djbinder here):
This view allows that.
This view allows that. (Although, entirely separately, considerations of entropy etc. would not allow infinite value.)
No, I don’t think identical questions arise (though I’m not sure). Skimming the above post, this view seems to solve most of the problematic examples you give. At any point a moral agent will exist in a universe with finite space and finite time, which will tend to infinity going forward. So you cannot have infinite starting points, and hence no zones of suffering etc. Also I think you don’t get problems with “welfare-preserving bijections” when things are well defined in time, though I struggle to explain why. It seems, for example, that w1 below is less bad than w2 (a small numerical sketch follows the table).
| Time  | t1 | t2 | t3 | t4 | t5 | t6 | t7 | … |
|-------|----|----|----|----|----|----|----|---|
| Agent | a1 | a2 | a3 | a4 | a5 | a6 | a7 | … |
| w1    | −1 |    | −1 |    | −1 |    | −1 | … |
| w2    | −1 | −1 | −1 | −1 | −1 | −1 | −1 | … |
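To make the “less bad when well defined in time” comparison concrete, here is a minimal sketch (my own illustration, not from the post) that compares the two worlds by their cumulative welfare up to each finite time: w1’s running total is never worse than w2’s, and the gap keeps growing, without ever summing an actual infinity.

```python
# Compare w1 (−1 at every other timestep) and w2 (−1 at every timestep)
# by their cumulative welfare up to a finite horizon T.
# Illustrative sketch only; the worlds are those in the table above.

def welfare_w1(t):
    """−1 at odd timesteps t = 1, 3, 5, ..., 0 otherwise."""
    return -1 if t % 2 == 1 else 0

def welfare_w2(t):
    """−1 at every timestep t = 1, 2, 3, ..."""
    return -1

def cumulative(welfare, T):
    """Total welfare realised by time T (always a finite sum)."""
    return sum(welfare(t) for t in range(1, T + 1))

for T in [10, 100, 1000]:
    c1, c2 = cumulative(welfare_w1, T), cumulative(welfare_w2, T)
    print(f"T={T:5d}  w1 total={c1:6d}  w2 total={c2:6d}  w1 - w2 = {c1 - c2}")

# At every finite time w1's running total is higher (less negative) than w2's,
# and the gap w1 - w2 grows roughly like T/2.
```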
I think what is true is probably something like “never-ending processes don’t exist, but arbitrarily long ones do”, but I’m not confident. My more general claim is that there can be intermediate positions between ultrafinitism (“there is a biggest number”) and a laissez-faire “anything goes” attitude, where infinities appear without care or scrutiny. I would furthermore claim (on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in here.
As to the infinite series examples you give, they are mathematically ill-defined without a regularization. There is a large literature in mathematics and physics on regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this look like voodoo magic, the correct answers can always be obtained rigorously by making everything finite.
For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum in which each time step is discounted by some factor γ. Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 it is finite. You can sum the series and then take γ→0 and thus derive a finite answer.
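As a concrete sketch of this, here is one way the discounting could be implemented, applied to the w1/w2 worlds from the table above. The specific per-step survival probability 1 − γ is my own choice of regulator, not something fixed by the comment.

```python
# Regularized totals for the w1/w2 worlds, discounting each timestep as if the
# universe is destroyed with probability gamma at each step.
# Sketch only: the survival-probability weight (1 - gamma)**t is one natural
# choice of regulator, not the only one.

def regularized_total(welfare, gamma, tol=1e-12):
    """Sum welfare(t) * (1 - gamma)**t until the weights become negligible."""
    total, t, weight = 0.0, 1, 1.0 - gamma
    while weight > tol:
        total += welfare(t) * weight
        t += 1
        weight *= 1.0 - gamma
    return total

def welfare_w1(t):          # -1 at odd timesteps
    return -1 if t % 2 == 1 else 0

def welfare_w2(t):          # -1 at every timestep
    return -1

for gamma in [0.1, 0.01, 0.001]:
    u1 = regularized_total(welfare_w1, gamma)
    u2 = regularized_total(welfare_w2, gamma)
    print(f"gamma={gamma:6.3f}  w1={u1:10.2f}  w2={u2:10.2f}  w1/w2={u1 / u2:.3f}")

# For every gamma > 0 both totals are finite and w1 is less bad than w2;
# the ratio w1/w2 tends to 1/2 as gamma -> 0.
```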
There may be many other ways to regulate the series, and it often turns out that how you regulate the series doesn’t matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, but rather with only some weaker specification of the limiting process. This is what happens, for instance, in QFT: the regularizations don’t matter; all we care about are the things that are independent of regularization, and so we tend to think of the theories as existing without a need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without a regularization it is very easy to make mistakes.
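As a small illustration of that regularization-independence (with two regulators of my own choosing, not anything canonical): a regularization-independent comparison, here the difference made by improving a single timestep, comes out the same whether we use an exponential discount or a hard cutoff.

```python
# Two different regulators applied to the same comparison: the difference in
# value between a world w (-1 at every timestep) and a world w' that is the
# same except +1 better at a single timestep (t = 5). Illustrative regulators only.

def welfare_w(t):
    return -1

def welfare_w_prime(t):
    return -1 + (1 if t == 5 else 0)

def diff_exponential(gamma, steps=10000):
    """Difference w' - w under an exponential discount (1 - gamma)**t."""
    return sum((welfare_w_prime(t) - welfare_w(t)) * (1 - gamma) ** t
               for t in range(1, steps))

def diff_cutoff(T):
    """Difference w' - w under a hard cutoff at time T."""
    return sum(welfare_w_prime(t) - welfare_w(t) for t in range(1, T + 1))

print(diff_exponential(0.001))   # ~ 0.995, tends to 1 as gamma -> 0
print(diff_cutoff(1000))         # exactly 1 for any T >= 5

# The regularization-independent quantity (the +1 difference) agrees across
# both regulators in the limit, even though the raw totals of w and w' differ.
```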
This is all a very long-winded way to say that there are at least two intermediate views one could have about these infinite sequence examples, between the “ultrafinitist” and the “anything goes”:
1. The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.
2. Maybe infinite situations like the one you described are allowed, but require some “equivalence class of regularizations” to be specified in order for the situation to be completely defined. Otherwise the answer is as indeterminate as if you’d given me the situation without specifying the numbers. I think this view is a little weirder, but also the one that seems to be adopted in practice by physicists.
As an aside, while neutrality violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor so that everything remains finite, it is easy to see that “small rearrangements” (where the amount a person can be moved in time is finite) do not change the answer, because the difference goes to zero as γ→0. But “big rearrangements” can cause differences that survive, or even grow, as γ→0. Such situations do arise in various physical settings, and are interpreted as changes to boundary conditions, whereas the “small rearrangements” manifestly preserve boundary conditions and manifestly do not cause problems with the limit. (The boundary is most easily seen by mapping the infinite time interval onto a compact interval, so that “infinity” is mapped to a finite point. “Small rearrangements” leave infinity unchanged, whereas “large” ones cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
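To illustrate the small-versus-big rearrangement distinction numerically (again using the survival-probability regulator from the earlier sketch, which is my own assumed choice): shifting one person’s −1 by a single step changes the regularized total by an amount that vanishes as γ→0, whereas rearranging the every-other-step world w1 onto every timestep changes it by an amount that blows up.

```python
# Small vs big rearrangements under the same discounting regulator as before.
# Sketch only: the survival-probability weight (1 - gamma)**t is an assumed choice.

def reg_sum(welfare, gamma, tol=1e-12):
    total, t, weight = 0.0, 1, 1.0 - gamma
    while weight > tol:
        total += welfare(t) * weight
        t += 1
        weight *= 1.0 - gamma
    return total

def w1(t):          # -1 at odd timesteps (as in the table above)
    return -1 if t % 2 == 1 else 0

def w1_small(t):    # same as w1, except the person at t = 1 is moved to t = 2
    return -1 if t == 2 or (t % 2 == 1 and t != 1) else 0

def w1_big(t):      # w1 rearranged onto every timestep (a "big" rearrangement)
    return -1

for gamma in [0.1, 0.01, 0.001]:
    small_shift = reg_sum(w1_small, gamma) - reg_sum(w1, gamma)
    big_shift = reg_sum(w1_big, gamma) - reg_sum(w1, gamma)
    print(f"gamma={gamma:6.3f}  small rearrangement: {small_shift:+.4f}"
          f"  big rearrangement: {big_shift:+.2f}")

# The small rearrangement changes the total by an amount of order gamma, which
# vanishes as gamma -> 0; the big rearrangement changes it by an amount of
# order -1/(2*gamma), which does not.
```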