I agree with most of your reasoning, but disagree significantly about this:
>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.
It’s true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation than if we had a zero chance of being in one, since only the remaining 10% of probability mass corresponds to influencing a full-scale, non-simulated future. But the pure utilitarian case for x-risk reduction isn’t very sensitive to an order-of-magnitude change in utility, since the expected utility seems many orders of magnitude larger than what’s needed to convince a pure utilitarian to focus on x-risks.
From a more selfish perspective, being in a simulation increases my desire to be involved in events that are interesting to the simulators, in case such people get simulated in more detail.
I’m somewhat concerned that being influenced much by the simulation hypothesis increases the risk that the simulation will be shut down, which seems like weak evidence in favor of not altering my behavior much in response to the hypothesis.
For these reasons, and because of WilliamKiely’s comments about priors, I want to treat HoH as more than 1% likely.