This argument is basically my biggest source of doubt for risk aversion, but I don’t think the response to dependent outcomes is adequate here.
You’d have to cherry-pick a subsequence so that the correlations can be arranged to tend to 0, but in picking a subsequence this way, you’re ignoring the infinitely many outcomes with correlations bounded away from 0, and the argument doesn’t carry over to the whole sequence.
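A toy counterexample (my own construction, not from the post) showing why a well-behaved subsequence isn’t enough:

```latex
% Let Z be a fair coin flip (0 or 1), let the odd-indexed outcomes
% X_1, X_3, X_5, \dots be i.i.d. with mean \mu and independent of Z,
% and let every even-indexed outcome equal Z. The odd subsequence
% satisfies the LLN, but the average over the whole sequence obeys
\frac{1}{n} \sum_{i=1}^{n} X_i \xrightarrow{\text{a.s.}} \frac{\mu + Z}{2},
% which is still random: the shared component Z never averages out.
```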
And we should expect correlations bounded away from 0 in an infinite universe. One reason is just that there should be infinitely many (nearly) identical agents in (nearly) identical situations. Another reason is that we have uncertainty about features of our world that’s very plausibly correlated across agents, like how hard it is to align AI, how prone individuals with power are to catastrophic conflict/destruction, whether or not the agent is in a relatively short-lived simulation, the density of aliens in our universe, the maximum possible density of suffering, whether or not P=NP, or what’s necessary for consciousness. You can try to condition on those first and then use the LLN or CLT (or generalizations), but I’m not sure risk neutrality will definitely come out ahead when you combine the results from each condition, because the different conditions could have different maximizers and rank options very differently. In some cases, your highest-EV options could backfire and become among the worst under the right conditions. Still, I’d guess this gives some reason to be somewhat less risk averse, but I don’t know how much, and it could depend on the specific decision.
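Here’s a minimal simulation of that conditioning point (the options, payoffs and probabilities are all made-up illustrations): a risky option with higher unconditional EV whose payoffs all load on one shared world-feature, so aggregating across locations never washes the risk out, and conditional on the bad value of the feature it’s among the worst:

```python
import numpy as np

rng = np.random.default_rng(0)

n_worlds, n_locations = 10_000, 200  # illustrative sizes

# Hypothetical shared world-feature W (e.g. "alignment turns out hard"),
# common to every outcome location; conditional on W, locations are independent.
W_bad = rng.random(n_worlds) < 0.1

# Risky option: +2 per location if W is good, -5 if bad (unconditional EV = 1.3).
# Safe option: +1 per location regardless of W (EV = 1.0).
risky = np.where(W_bad, -5.0, 2.0)[:, None] + rng.normal(0, 1, (n_worlds, n_locations))
safe = 1.0 + rng.normal(0, 1, (n_worlds, n_locations))

risky_avg, safe_avg = risky.mean(axis=1), safe.mean(axis=1)
print("EVs:", risky_avg.mean(), safe_avg.mean())              # ~1.3 vs ~1.0
print("P(risky beats safe):", (risky_avg > safe_avg).mean())  # ~0.9, not -> 1
print("risky avg given bad W:", risky_avg[W_bad].mean())      # ~-5: among the worst
```

No matter how large n_locations gets, P(risky beats safe) stays pinned at P(W good), because the shared feature doesn’t average out.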
Plus, identical distributions can’t capture all correlations that matter.
In the extreme, for sequences of independent trials with payoffs increasing without bound but probability of positive payoff decreasing quickly (and unbounded variance), risk neutrality leads to almost surely worse outcomes:
https://alexanderpruss.blogspot.com/2022/10/expected-utility-maximization.html
https://alexanderpruss.blogspot.com/2022/10/the-law-of-large-numbers-and-infinite.html
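For concreteness, here’s a runnable sketch in the spirit of Pruss’s construction (my parameterization, not his exact example): trial n pays n^3 with probability 1/n^2, so each trial’s EV is n and risk neutrality takes every such bet over a sure payoff of 1 per trial; but the win probabilities are summable, so by Borel–Cantelli only finitely many trials pay off almost surely, and the sure thing pulls ahead on almost every path:

```python
import numpy as np

rng = np.random.default_rng(1)

N, paths = 10_000, 1_000  # trials per history, number of simulated histories

n = np.arange(1, N + 1)
# Risky trial n: pays n^3 with probability 1/n^2 (so its EV is n); else 0.
wins = rng.random((paths, N)) < 1.0 / n**2
risky_cum = (wins * n.astype(float) ** 3).cumsum(axis=1)
safe_cum = n.astype(float)  # sure payoff of 1 per trial: cumulatively 1, 2, ..., N

for horizon in (100, 1_000, 10_000):
    behind = (risky_cum[:, horizon - 1] < safe_cum[horizon - 1]).mean()
    print(f"after {horizon} trials, risky trails the sure thing on {behind:.0%} of paths")
```

The trailing fraction grows with the horizon and tends to 1, even though the risky option’s EV over N trials is on the order of N^2.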
Also, as you mention, there may just not be enough roughly uncorrelated outcomes if the universe is finite (although my best guess is that the universe is infinite in spatial extent).
Maybe you could try to group outcomes so that the different groups’ sums have vanishing “covariance” with one another, and hope you’re left with enough groups satisfying some condition that lets you apply something like the LLN or CLT.
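A toy version of that grouping move (my construction, with made-up sizes): all the correlation is packed within groups via a shared per-group factor, so group sums are uncorrelated with one another and an LLN applies at the level of groups:

```python
import numpy as np

rng = np.random.default_rng(2)

G, m = 5_000, 50                        # G groups of m outcomes each (illustrative)
Z = rng.normal(0, 1, (G, 1))            # one latent factor per group
X = Z + rng.normal(0, 1, (G, m))        # outcomes correlated *within* each group

group_sums = X.sum(axis=1)
print("within-group corr:", np.corrcoef(X[:, 0], X[:, 1])[0, 1])  # ~0.5
print("corr between neighbouring group sums:",
      np.corrcoef(group_sums[:-1], group_sums[1:])[0, 1])         # ~0
print("mean of group averages:", (group_sums / m).mean())         # ~0, by the LLN
```

The open question is whether enough of the real correlation structure is block-like for a move like this to go through.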
On the other hand, I’d guess it very often won’t be the case that a risky option is very probably better than each low risk option*, but that standard seems higher than necessary. We’re not usually going to get (near) certainty in either direction, so it could be suspicious to always choose low risk options anyway. If it’s usually the case that a risky option is probably better than the low risk option* and, when it is worse, isn’t worse by more than it’s better when it’s better, that seems like about enough reason to reject risk aversion in practice (a way to check this is sketched after the footnote below).
I’m not sure this exact statement is enough to avoid counterexamples, but something in this direction seems right.
*separately for each low risk option (not better than the statewise max of the low risk options), but the same risky option. We could also compare quantiles rather than be sensitive to the specific way options are related statewise. Stochastic dominance specifically seems like too much to expect, though, including using background value as in Tarsney’s paper, if too much of the background is correlated; e.g., uncertainty about the requirements for consciousness and how much value different minds can generate can make basically all background value highly correlated with the local causal value of options.
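To make the starred criterion concrete, here’s how one might check it numerically (the distributions below are placeholders I made up, not a model of any real decision): for the same risky option against each low risk option separately, estimate how often the risky option is better and by how much in each direction, plus a quantile comparison:

```python
import numpy as np

rng = np.random.default_rng(3)

states = 100_000  # joint draws over states of the world (placeholder model)
risky = rng.normal(1.3, 3.0, states)
low_risk = {"A": rng.normal(1.0, 0.2, states), "B": rng.normal(0.9, 0.3, states)}

for name, safe in low_risk.items():
    worse = risky < safe
    print(f"vs {name}: P(risky better) = {1 - worse.mean():.2f}, "
          f"mean gain when better = {(risky - safe)[~worse].mean():.2f}, "
          f"mean loss when worse = {(safe - risky)[worse].mean():.2f}")

# Quantile comparison, insensitive to how the options are coupled statewise:
qs = [0.1, 0.25, 0.5, 0.9]
print("risky quantiles:", np.round(np.quantile(risky, qs), 2))
print("A quantiles:", np.round(np.quantile(low_risk["A"], qs), 2))
```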
Also, the correct footnoted statement of the LLN result you use for decreasing correlations has pretty strong conditions, and your informal statement of it in the main text has trivial counterexamples, e.g. with just one outcome independent from the rest, and the rest all identical as random variables.
For any positive epsilon, you need all but finitely many of the covariances to be less than epsilon in absolute value. This means it can’t be the case that infinitely many of the outcomes have non-negligible covariance (bounded below in absolute value by some epsilon) with any other outcome. But if we expect non-negligible correlations at all between causally separated outcomes in an infinite universe, I think we should expect non-negligible correlations between infinitely many pairs of them.
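Spelling that condition out (my formalization of how I read the footnoted requirement):

```latex
% For every positive epsilon, only finitely many pairs of outcomes may have
% covariance at least epsilon in absolute value:
\forall \varepsilon > 0 : \quad
\#\{(i,j) : i \neq j,\ |\operatorname{Cov}(X_i, X_j)| \ge \varepsilon\} < \infty
```

If even one epsilon has infinitely many such pairs, which is exactly what I’d expect in an infinite universe, the hypothesis fails.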