This doesn’t necessarily eliminate all risk aversion, because the outcomes of actions can also be substantially correlated across correlated agents for various reasons: correlated agents will tend to be biased in the same directions, the difficulty of AI alignment is correlated across the multiverse, and the probability of consciousness and the moral weights of similar moral patients are correlated across the multiverse. So you could only apply the LLN or CLT after conditioning separately on each possible value of such common factors, aggregating the conditional expected value across the multiverse, and then recombining.
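The condition-then-recombine step can be illustrated with a toy Monte Carlo sketch (the binary "alignment difficulty" factor and all probabilities here are illustrative assumptions, not from the text): given the shared factor, agents' outcomes are independent, so the LLN makes the per-world average concentrate on the conditional expectation, but the shared factor itself never averages out, so the unconditional aggregate stays risky.

```python
import random
import statistics

random.seed(0)

def simulate_aggregate(n_agents):
    # Hypothetical common factor shared by all correlated agents,
    # e.g. alignment turns out "hard" or "easy" with equal probability.
    hard = random.random() < 0.5
    p_success = 0.2 if hard else 0.8  # assumed conditional success rates
    # Conditional on the common factor, outcomes are independent,
    # so their mean converges (LLN) to p_success.
    outcomes = [1.0 if random.random() < p_success else 0.0
                for _ in range(n_agents)]
    return statistics.mean(outcomes)

# Repeat the whole aggregate many times: each run's average lands
# near one of the two conditional means, 0.2 or 0.8, rather than
# near the unconditional mean 0.5 -- the common factor is residual risk.
runs = [simulate_aggregate(10_000) for _ in range(200)]
near_02 = sum(abs(r - 0.2) < 0.05 for r in runs)
near_08 = sum(abs(r - 0.8) < 0.05 for r in runs)
print(near_02, near_08)
```

With a large number of agents, essentially every run falls near one of the two conditional means, which is the point of the paragraph: the LLN/CLT applies within each conditioned world, and the remaining spread comes entirely from the common factors you then recombine over.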