In this thread, you try to argue as well as you can against the cause you currently consider the highest expected value cause to be working on. Then you recalibrate your emotions given the new evidence you just generated.
This is not just a fun exercise. It has been shown that if you want to change a person’s moral intuitions, the best way is to get them to scrutinize in detail the policy they favor, not to show them evidence for why other policies are a good idea or why they are wrong. To change your own mind, the best way is to give it a zooming lens into itself. This is an iterated version of Nick Bostrom’s technique of writing a hypothetical apostasy.
So what is your cause?
The cause I currently think is most important is what I call “getting the order right”. It assumes that, for the technological interventions that might drastically reshape the future, how dangerous each one is in X-risk terms depends on when it is discovered or invented relative to the others: under different contexts and timelines, each would be significantly more or less dangerous.
So, here is why this may not be the best cause:
To begin with, it seems plausible that the Tricky Expectation View discussed on p. 85 of Beckstead’s thesis holds despite his arguments against it. This would drastically reduce the overall importance of existential risk reduction. One way in which TEV could hold, or at least one argument for views in that family, comes from considering the set of all possible human minds and noticing that many considerations, both probabilistic and moral, stop being intuitive when we ask whether plucking one of these entities, each with an infinitesimally small chance of ever existing, from non-existence into existence is actually a good deal. No matter what we do, most possible minds will never exist.
Depending on how we carve the conceptual distinction that individuates a mind, the probability of existence for any given mind gets even lower. Furthermore, if being of a different type (in the philosophical ‘type’/‘token’ sense) from something that has already existed is not a relevant distinction, the argument gets even easier: any given possible mind token will, with overwhelming probability, never live.
If there are infinitesimally small differences between minds, then there are at least ℵ₁ non-existent minds, and at least ℵ₂ non-existent mind tokens.
These infinities seem to point to some sort of asymmetric view, on which some form of affiliation with existence is indeed correlated with being valuable. It may not be as straightforward as “only living minds matter”, or even the Tricky Expectation View, but something in that vicinity: some sort of discount rate that is fully justified, even in the face of astronomical waste, moral uncertainty, etc. This would be one angle of attack.
Another angle is to assume that X-risk does indeed trump all other problems, but that it can be reduced more efficiently by doing things other than figuring out the most desirable order. It may be that there are as-yet-unknown anthropogenic X-risks, in which case focusing on locating ways in which humans could soon destroy themselves would be more valuable than solving the known ones. An argument for that may take this form:
A) There are true, relevant, currently unknown facts about the dung beetle.
B) Our Bayesian estimate of how many unknown unknowns remain in a domain should decrease roughly with the amount of research that has already been done on that topic.
C) Substantially more research has been done on dung beetles than on existential risks.
Conclusion: There are true, relevant, currently unknown facts about X-risks.
‘Relevant’ here ranges over [X-risks], meaning either a substantial revision of the conditional probabilities of the different X-risks, or else a substantial revision of the whole network once an unknown risk is accounted for.
So getting the order right would be less important than spending resources on finding unknown unknowns.
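A minimal numerical sketch of premise B, under assumptions entirely of my own (the initial number of facts, the per-fact discovery probability, and the research-unit counts are made-up placeholders, not estimates): if each unit of research independently uncovers any given fact with some small probability, the expected number of still-unknown facts shrinks with accumulated research, so the much less studied domain retains far more unknown unknowns.

```python
# Toy model of premise B (illustrative assumptions, not data): a domain starts with
# n_facts relevant facts, and each unit of research independently uncovers any given
# fact with probability p. Expected unknowns then decay as n_facts * (1 - p) ** units.

def expected_unknowns(n_facts: float, p: float, research_units: int) -> float:
    """Expected number of relevant facts still unknown after the given research effort."""
    return n_facts * (1 - p) ** research_units

# Placeholder numbers, chosen only to illustrate the direction of the effect.
well_studied = expected_unknowns(n_facts=100, p=0.01, research_units=500)  # e.g. dung beetles
neglected    = expected_unknowns(n_facts=100, p=0.01, research_units=20)   # e.g. X-risks

print(f"expected unknowns, heavily researched domain: {well_studied:.1f}")  # ~0.7
print(f"expected unknowns, lightly researched domain: {neglected:.1f}")     # ~81.8
```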
Anti-me: Finally, if a substantial share of our probability mass sits on the hypothesis that we are in a simulation (say 25% confidence), then the amount of research so far dedicated to avoiding X-risk for simulations is even lower than the amount put into getting the order right. So one’s counterfactual irreplaceability would be higher in studying and understanding how to survive as a simulant, and how to keep one’s simulation from being destroyed.
Anti-me 2: An opponent may say that if we are in a simulation, then our perishing would not be an existential risk, since at least one layer of civilization exists above us. Our being destroyed would not be a big deal in the grand scheme of things, so the order in which our technological maturity progresses is irrelevant.
Diego: The natural response is that this would introduce one more multiplicative factor into the X-risk of value loss: we conditionalize the likelihood of our values being lost on our being in a simulation, and that becomes the new value of X-risk prevention. My counterargument is that, once the importance of X-risk prevention becomes sufficiently small, other considerations besides what Bostrom calls MaxiPOK start to enter the field of crucial considerations. Not only would we want to increase the chances of an OK future with no catastrophe, we would also want to steer the future into an awesome place within our simulation. Not unlike what a technologically progressive monotheist utilitarian would do once she conditionalizes on God taking care of X-risk.
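To make that multiplicative factor concrete, here is a minimal sketch with placeholder numbers of my own (the 25% credence echoes the figure above; the fraction of value lost if only our simulated branch ends is an arbitrary assumption): the expected value at stake in preventing a local catastrophe is a weighted average over the two hypotheses.

```python
# Minimal sketch of the conditionalization above. All numbers are illustrative
# placeholders, not claims about the actual credences or stakes.

p_simulation = 0.25        # credence that we are in a simulation (the 25% mentioned above)
loss_if_basement = 1.0     # normalized value lost if a basement-level civilization perishes
loss_if_simulated = 0.1    # assumed fraction of value lost if only our simulated branch ends

# Expected value lost per local existential catastrophe, averaged over the two hypotheses.
expected_loss = p_simulation * loss_if_simulated + (1 - p_simulation) * loss_if_basement
print(f"expected value lost per local catastrophe: {expected_loss:.3f}")  # 0.775 of the naive 1.0
```

The smaller this factor gets, the more weight considerations beyond MaxiPOK start to carry.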
But MaxiGreat also seems to rely fundamentally on the order in which technological maturity is achieved. If we get emulations too soon, Malthusian dynamics may create an OK, but not awesome, future for us. If we become transhuman in some controlled way and intelligence explosions turn out to be impossible, we may end up in the awesome future dreamt of by David Pearce, for instance.
(It’s getting harder to argue against myself in this simulation of being in a simulation. Maybe order indeed should be the crucial consideration for the subset of probability mass in which we are simulated, so I’ll stop here.)