Anti-me: Finally, if our probability mass is substantially concentrated in the hypothesis that we are in a simulation (say, 25% confidence), then the amount of research so far dedicated to avoiding X-risk for simulations is even lower than the amount put into getting the order right. So one's counterfactual irreplaceability would be higher in studying and understanding how to survive as a simulant, and how to keep one's simulation from being destroyed.
Anti-me 2: An opponent may say that if we are in a simulation, then our perishing would not be an existential risk, since at least one layer of civilization exists above us. Our being destroyed would not be a big deal in the grand scheme of things, so the order in which we progress toward technological maturity is irrelevant.
Diego: The natural response is that this would introduce one more multiplicative factor on the X-risk of value loss: we condition the likelihood of our values being lost on our being in a simulation, and that conditional probability sets the new value of X-risk prevention. My counterargument would be that once the importance of X-risk prevention becomes sufficiently small, other considerations, besides what Bostrom calls MaxiPOK, start to enter the field of crucial considerations. Not only would we desire to increase the chances of an OK future with no catastrophe, but we would also want to steer the future into an awesome place, within our simulation. Not unlike what a technologically progressive monotheist utilitarian would do, once she conditionalizes on God taking care of X-risk.
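To make the multiplicative-factor point concrete, here is a minimal sketch in Python. The numbers are purely illustrative assumptions (only the 25% simulation credence comes from the dialogue); it just shows how conditioning on being simulated can shrink the expected value of X-risk prevention, leaving more relative room for MaxiGreat-style considerations.

```python
# Minimal sketch: how a simulation credence rescales the expected value of
# X-risk prevention. All numbers are illustrative assumptions, except the
# 25% simulation credence mentioned in the dialogue.

p_sim = 0.25             # credence that we are in a simulation
loss_if_basement = 1.0   # disvalue of losing our values if we are base-level reality
sim_discount = 0.1       # hypothetical multiplicative factor: how much that loss
                         # still matters given at least one layer exists above us

# Value of X-risk prevention conditional on each hypothesis.
value_if_sim = loss_if_basement * sim_discount      # within the simulated subset
expected_value = (1 - p_sim) * loss_if_basement + p_sim * value_if_sim

print(f"value of prevention if simulated: {value_if_sim:.3f}")    # 0.100
print(f"overall expected value:           {expected_value:.3f}")  # 0.775
```

If the discount factor is small enough, the within-simulation value of pure catastrophe-avoidance drops toward zero, which is exactly the point at which steering toward an awesome future starts to compete with MaxiPOK as a crucial consideration.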
But MaxiGreat also seems to rely fundamentally on the order in which technological maturity is achieved. If we get emulations too soon, Malthusianism may create an OK, but not awesome, future for us. If we become transhuman in some controlled way and intelligence explosions are impossible, we may end up in the awesome future dreamt of by David Pearce, for instance.
(It’s getting harder to argue against me in this simulation of being in a simulation. Maybe order should indeed be the crucial consideration for the subset of probability mass in which we are simulated, so I’ll stop here.)