I know that this is a necro but I just wanted to point out that the problem still arises as long as you have any non-trivial credence in your actions having infinite consequences. For infinite consequences always dominate finite ones as long as the former have any probability above 0.
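In expected-value terms (a standard rendering of the point, not spelled out in the original comment): for any probability $p > 0$ of an infinite payoff and any finite payoff $x$,

$$\mathbb{E} = p \cdot \infty + (1 - p) \cdot x = \infty,$$

so any act with a positive probability of infinite consequences has greater expected value than every act with purely finite consequences.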
One’s actions leading to infinite factual value does not mean they lead to infinite Shapley value, which is what one arguably should care about. If N agents are in a position to achieve a factual value V, the Shapley value of each agent is V/N. This naively suggests the Shapley value goes to infinity as V goes to infinity. However, I think we should assume the value one can achieve in the world is proportional to the number of agents in it (V = k N). So, in this toy model, the Shapley value will be constant (equal to k), not depending on the number of agents.
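A minimal sketch of this toy model (the symmetric coalition function v(S) = V·|S|/N is my own assumption, chosen so that the grand coalition achieves V = kN): brute-forcing the Shapley values over all orderings gives each agent exactly k, regardless of N.

```python
# Minimal sketch, not from the original comment: Shapley values in a
# symmetric toy game where the value achieved scales linearly with the
# number of contributing agents, v(S) = V * |S| / N (so the grand
# coalition achieves V = k * N).
from itertools import permutations

def shapley_values(n_agents: int, total_value: float) -> list[float]:
    """Exact Shapley values by averaging marginal contributions
    over every ordering of the agents (fine for small N)."""
    def v(coalition: set) -> float:
        return total_value * len(coalition) / n_agents

    shapley = [0.0] * n_agents
    orderings = list(permutations(range(n_agents)))
    for order in orderings:
        members: set = set()
        for agent in order:
            before = v(members)
            members.add(agent)
            shapley[agent] += v(members) - before
    return [s / len(orderings) for s in shapley]

k = 3.0
for n in (2, 4, 6):
    # Total achievable value grows with the number of agents: V = k * N.
    print(n, shapley_values(n, k * n))  # every agent's share stays at k = 3.0
```

The brute-force enumeration is exponential in N, but for this symmetric game symmetry and efficiency already force the answer: each of the N agents gets V/N = k.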
In other words, if one can cause infinite value, one’s actions can be infinitely important. However, in this model (V = k N), infinite value implies the existence of infinitely many agents, i.e. null neglectedness. So the infinite importance is cancelled out by the null neglectedness, and therefore I would say the cost-effectiveness does not change.
Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to end in a Big Bounce scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists, even if they’re in the minority, are still kicking around that and other cyclic cosmologies, which do theoretically allow for literally infinite (morally relevant!) effects from individual actions.
I do not think a Big Bounce scenario would imply infinite effects. It would imply a non-zero chance of arbitrarily large effects, but that is quite different from infinite effects. Values which tend to infinity can still be compared, and therefore would not wreak havoc on ethics. In contrast, actual infinities cause lots of problems.
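To illustrate the difference with a toy pair of payoff sequences (my own example): both tend to infinity, yet one dominates the other at every finite stage, so they remain comparable.

$$a_n = 2n, \qquad b_n = n, \qquad \lim_{n \to \infty} a_n = \lim_{n \to \infty} b_n = \infty, \qquad a_n > b_n \ \text{for all } n \geq 1.$$

A prospect paying $a_n$ can be preferred to one paying $b_n$ at every stage, whereas two prospects each worth an actually infinite amount admit no such ranking.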