If you maximize expected value, you should be taking expectations through small probabilities, including the probability that we have the physics wrong or that things could go on forever (or without hard upper bound) temporally. Unless you can be 100% sure there are no infinities, your expected values will be infinite or undefined. And there are, I think, hypotheses that can’t be ruled out and that could involve infinite affectable value.
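To make that concrete (a minimal illustration in notation I’m introducing here, not a formal argument): let \(p > 0\) be your credence in some hypothesis under which affectable value is infinite, and \(v\) the finite expected value otherwise. Then

\[
\mathbb{E}[V] \;=\; p \cdot \infty \;+\; (1 - p) \cdot v \;=\; \infty \quad \text{for any } p > 0,
\]

and if hypotheses with value \(+\infty\) and \(-\infty\) both get positive credence, the expectation takes the form \(\infty - \infty\) and is undefined.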
In response to Carl Shulman on acausal influence, David Manheim said to renormalize. I’m sympathetic and would probably agree with doing something similar, but the devil is in the details. There may be no uniquely principled way to do this, and some things can still break down, e.g. you can end up with actions that are morally incomparable.
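To illustrate the kind of breakdown I mean, here’s a toy sketch (my own construction, not Manheim’s actual proposal): compare two actions by the sign of the difference of their running value totals, an overtaking-style criterion. When that sign never settles, the actions come out morally incomparable.

```python
from typing import Callable, Optional

def compare_by_overtaking(
    stream_a: Callable[[int], float],
    stream_b: Callable[[int], float],
    horizon: int = 10_000,
    tol: float = 1e-9,
) -> Optional[int]:
    """Compare two infinite value streams by the sign of their
    partial-sum difference over the tail of a finite horizon.
    (An overtaking-style criterion; truncating at a horizon is
    itself a heuristic, since a true limit can't be computed.)

    Returns +1 if A overtakes B, -1 if B overtakes A, 0 for a tie,
    and None if the sign keeps flipping, i.e. the comparison breaks
    down and the actions are incomparable under this criterion.
    """
    diff = 0.0
    tail_signs = set()
    for t in range(1, horizon + 1):
        diff += stream_a(t) - stream_b(t)
        if t > horizon // 2:  # only the tail matters for the limit
            if diff > tol:
                tail_signs.add(+1)
            elif diff < -tol:
                tail_signs.add(-1)
            else:
                tail_signs.add(0)
    if tail_signs == {+1}:
        return +1
    if tail_signs == {-1}:
        return -1
    if tail_signs == {0}:
        return 0
    return None  # sign never settles: morally incomparable here

# A adds value t at even steps, B adds value t at odd steps: the
# running difference alternates sign forever, so there's no verdict.
a = lambda t: float(t) if t % 2 == 0 else 0.0
b = lambda t: float(t) if t % 2 == 1 else 0.0
print(compare_by_overtaking(a, b))   # None (incomparable)

# A uniform improvement, by contrast, is detected cleanly.
print(compare_by_overtaking(lambda t: 2.0, lambda t: 1.0))  # 1
```

The point isn’t the details; it’s that any such renormalization scheme needs tie-breaking choices (horizon, tolerance, criterion) that don’t seem uniquely principled.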
This is my crux, I think. I have yet to find a single persuasive example of an ethical decision I might face for which incorporating infinite ethics considerations suggests a different course of action. I don’t remember whether Carlsmith’s essay provided any such examples; if it did, I likely didn’t find them persuasive, since I skimmed it with this focus in mind. I interpreted Manheim & Sandberg’s paper to say that I likely wouldn’t find any such examples if I kept looking.
You could want to do acausal trades and cooperate with agents causally disconnected from you. You’d expect those who reason (sufficiently) similarly to do the same in return, and your cooperating would be evidence that they cooperate, making it more likely.
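A toy version of the evidential argument (my notation, not from the thread): suppose your credence that a causally disconnected agent’s choice mirrors yours is \(r\). Then

\[
\mathbb{E}[V \mid \text{cooperate}] = r\,u(C,C) + (1-r)\,u(C,D), \qquad
\mathbb{E}[V \mid \text{defect}] = r\,u(D,D) + (1-r)\,u(D,C),
\]

so cooperating beats defecting whenever \(r\) is high enough relative to the payoff gaps, even with no causal channel between you.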
If you’re locally difference-making risk averse, e.g. you don’t care about making a huge difference with only a very tiny probability, then taking acausal influence into account should make you (possibly much) less difference-making risk averse, according to Wilkinson.
I don’t see why acausal trade makes infinite ethics decision-relevant, for essentially the reasons Manheim & Sandberg discuss in Section 4.5: acausal trade alone doesn’t imply infinite value; as footnote 41 puts it, “In mainstream cosmological theories, there is a single universe, and the extent can be large but finite even when considering the unreachable portion (e.g. in closed topologies). In that case, these alternative decision theories are useful for interaction with unreachable beings, or as ways to interact with powerful predictors, but still do not lead to infinities”; and physical limits on information storage and computation would still apply to any acausal coordination.
I’ll look into Wilkinson’s paper, thanks.
They aren’t asserting with certainty that the whole universe, including the unreachable portion, is finite in extent. They’re just saying that’s possible, and in the sentence that footnote is attached to, they note an infinite universe is possible too.
Even if you think a universe with infinite spatial extent is very unlikely, you should still entertain the possibility. If there’s a chance the universe is infinite and you can have infinite impact (before renormalizing), a risk-neutral expected value reasoner should wager on that.
FWIW, I’m sympathetic to their arguments in that section against expected value maximization, or at least those that undermine the arguments for it. I’m not totally convinced of expected value maximization myself.
However, that doesn’t give a positive case for ignoring these infinities. Personally, I don’t find infinite acausal impacts too unlikely: it seems more likely than not that acausal influence is possible, and not too unlikely that the universe is infinite in spatial extent (and in the right way to be influenced infinitely acausally).
But I am optimistic about renormalization.