The examples in the post assign expected utilities using inconsistent methodologies. If it’s possible to have long-run effects on future generations, then many actions will have such effects (elections can sometimes cause human extinction; an additional person saved from malaria could go on to cause or prevent extinction). If ludicrously vast universes and influence over them are subjectively possible, then we should likewise consider that we are less likely to get ludicrous returns if we are extinct or badly governed (see the ‘empirical stabilization assumptions’ in Nick Bostrom’s infinite ethics paper). And under certain decision theories we might have infinite impact when we decide to eat a sandwich, if there are infinitely many physically identical beings in the universe who will make the same decision as we do.
Any argument of the form “consider type of consequence X, which is larger than the consequences you had previously considered, as it applies to option A” calls for applying X to the analysis of the other options as well. When you do that, you don’t get 10^100-fold differences in expected utility of this sort without an overwhelming amount of evidence indicating that A has 10^100+ times the impact on X of option B or C (or of your prior over other, as-yet-unknown alternatives you may find later).
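To illustrate the arithmetic (a rough sketch with hypothetical quantities, not numbers from the post): suppose the huge type-X payoff has value $V_X$, and options A and B realize it with probabilities $p_A$ and $p_B$, alongside ordinary payoffs $v_A$ and $v_B$ that are negligible next to $p_A V_X$ and $p_B V_X$. Then

$$\frac{EU(A)}{EU(B)} = \frac{p_A V_X + v_A}{p_B V_X + v_B} \approx \frac{p_A}{p_B},$$

so a $10^{100}$-fold gap in expected utility requires evidence that $p_A$ is roughly $10^{100}$ times $p_B$; the sheer size of $V_X$ cancels out and does nothing by itself to separate the options.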
I believe this concern is addressed by the next post in the series. The current examples implicitly consider only two possible outcomes: “No effect” and “You do blah blah blah and this saves precisely X lives...” The next post expands the model to include arbitrarily many possible outcomes for each action under consideration, and after doing so it ends up reasoning in much the way you describe to defuse the initial worry.