Ignoring everything after 1 min, yeah I’d be very confident listening to music is better. :) You could technically still say “what if you’re in a simulation and the simulators severely punish listening to music,” but this seems to be the sort of contrived hypothesis that Occam’s razor can practically rule out (ETA: not sure I endorse this part; I think the footnote is more on-point).[1]
For how long after the actions would their effects have to be taken into account for your indeterminacy to come back?
In principle, we could answer this by:
trying to model the possible consequences that could play out up to some time point T; estimating the net welfare under each hypothesis that falls out of this model; and seeing how much precision we can justify without making arbitrary choices;
checking at which time point T* the UEVs of the options become incomparable, according to these models.
(Of course, that’s intractable in practice, but there’s a non-arbitrary boundary between “comparable” and “incomparable” that falls out of my framework. Just like there’s a non-arbitrary boundary between precise beliefs that would imply positive vs. negative EV. We don’t need to compute T* in order to see that there’s incomparability when we consider T = ∞. (I might be missing the point of your question, though!))
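For concreteness, here’s a minimal toy sketch in Python of the kind of horizon-scan described above. It assumes imprecise credences are modeled as a finite set (“representor”) of probability distributions over welfare hypotheses, and that one option determinately beats another at horizon T only if it has higher expected welfare under every distribution in the set. The hypotheses, numbers, and even the decision rule are illustrative stand-ins rather than the actual framework, with “expected welfare” standing in for the UEVs being compared:

```python
# Toy sketch (illustrative only): imprecise credences as a finite
# "representor" of probability distributions over welfare hypotheses.
# Option A determinately beats option B at horizon T iff its expected
# cumulative welfare is higher under *every* distribution in the
# representor; if the distributions disagree on the sign, the options
# are incomparable at T. All numbers are made up.

def expected_welfare(option, dist, horizon):
    """EV of cumulative welfare up to `horizon` under one credence function.

    `option` maps each hypothesis to a per-period welfare stream (a
    function of t); `dist` maps each hypothesis to its probability.
    """
    return sum(
        p * sum(option[h](t) for t in range(horizon))
        for h, p in dist.items()
    )

def compare(option_a, option_b, representor, horizon):
    """Return 'A', 'B', or 'incomparable' at the given horizon."""
    diffs = [
        expected_welfare(option_a, d, horizon)
        - expected_welfare(option_b, d, horizon)
        for d in representor
    ]
    if all(x > 0 for x in diffs):
        return "A"
    if all(x < 0 for x in diffs):
        return "B"
    return "incomparable"

# Hypothesis h1: listening to music has slightly good long-run side
# effects; hypothesis h2: slightly bad. Both agree on the immediate benefit.
music = {"h1": lambda t: 1.0 if t == 0 else 0.01,
         "h2": lambda t: 1.0 if t == 0 else -0.02}
silence = {"h1": lambda t: 0.0, "h2": lambda t: 0.0}

# Two admissible credence functions that disagree about h1 vs. h2.
representor = [{"h1": 0.7, "h2": 0.3}, {"h1": 0.3, "h2": 0.7}]

# Scan horizons: the first T at which the verdict flips to
# incomparability plays the role of T*.
for T in (1, 10, 100, 1000):
    print(T, compare(music, silence, representor, T))
```

With these made-up numbers, the verdict flips from “determinately better” to “incomparable” somewhere between T = 10 and T = 100; that flip point is the analogue of T*, and no flip ever occurring would correspond to comparability even at T = ∞.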
I agree there is a non-arbitrary boundary between “comparable” and “incomparable” that results from your framework. However, I think the empirics of some comparisons like the one above are such that we can still non-arbitrarily say that one option is better than the other for an infinite time horizon. Which of the empirical beliefs you hold would have to change for this to be the case? For me, the crucial consideration is whether the expected effects of actions decrease or increase over time and space. I think they decrease, and that one can get a sufficiently good grasp of the dominant near-term effects to meaningfully compare actions.
Which of the empirical beliefs you hold would have to change for this to be the case?
For starters, we’d either need:
all the factors discussed in this section to be much simpler (or otherwise structured in a way that we could model with the requisite precision); or
sufficiently strong evidence that our intuitions can implicitly weigh up such complex factors with the requisite precision.
(Sorry if this is more high-level than you’re asking for. The concrete empirical factors are elaborated in the linked section.)
Re: your claim that “expected effects of actions decrease over time and space”: To me, the various mechanisms for potential lock-in within our lifetimes seem not too implausible. So it seems overconfident to have a vanishingly small credence that your action makes the difference between two futures of astronomically different value. See also Mogensen’s examples of mechanisms by which an AMF donation could affect extinction risk. But please let me know if there’s some nuance in the arguments of the posts you linked that I’m not addressing.
As far as I can tell, the factors you mention refer to the possibility of influencing astronomically valuable worlds. I agree locking in some properties of the world may be possible. However, even in this case, I would expect the interventions causing the lock-in to increase the probability of astronomically valuable worlds by an astronomically small amount. I think the counterfactual interventions would cause a similarly valuable lock-in slightly later, and that the difference between the factual and counterfactual expected impartial welfare would quickly tend to 0 over time, such that it is negligible after 100 years or so.
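One toy way to formalize this reasoning (my own sketch under strong simplifying assumptions, not necessarily how the commenter would model it): suppose the factual intervention yields a welfare trajectory $v(t - t_0)$ from a lock-in at time $t_0$, and the counterfactual yields the same trajectory delayed by $\delta$. The cumulative difference up to horizon $T$ is then

\[
\Delta(T) = \int_{t_0}^{T} v(t - t_0)\,dt - \int_{t_0 + \delta}^{T} v(t - t_0 - \delta)\,dt = \int_{T - t_0 - \delta}^{\,T - t_0} v(s)\,ds \;\le\; \delta \cdot \sup_s v(s),
\]

so the per-period difference $v(t - t_0) - v(t - t_0 - \delta) \approx \delta\,v'(t - t_0)$ vanishes wherever the trajectory levels off, and $\Delta(T)$ stays bounded while total welfare grows with $T$. Whether real interventions fit this “same trajectory, slightly delayed” model is, of course, exactly the point in dispute above.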
[1] Or, if that’s false, presumably these weird hypotheses are just as much of a puzzle for precise Bayesianism (or similar) too, if not worse?