How are the mentioned first and second objections distinct?
“Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal?”
I fail to get the meaning. Could anybody reword this for me?
“The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.”
Not sure I get this. Is a civilisation stalling irrevocably into chaos after narrowly surviving a pandemic a central example of this?
About the two objections: What I’m saying is that, as far as I can tell, the first common longtermist objection to working on x-risk reduction is that it’s actually bad, because future human civilization is of negative expected value. The second is that, even if it is good to reduce x-risk, the resources spent doing that could better be used to effect a trajectory change. Perhaps the resources needed to reduce x-risk by (say) 0.001% could instead improve the future by (say) 0.002% conditional on survival.
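(If it helps, here’s a toy back-of-the-envelope version of that comparison. The numbers are invented, and I’m assuming “reduce x-risk by 0.001%” means an absolute 0.001-percentage-point bump in survival probability and that extinction counts as zero value; it’s just a sketch of the trade-off, not a claim about the actual magnitudes.)

```python
# Toy comparison of the two uses of resources (illustrative numbers only).
# Assumptions: extinction counts as 0 value, and "reduce x-risk by 0.001%"
# means an absolute 0.001-percentage-point increase in survival probability.

V = 1.0   # expected value of the future conditional on survival (normalised)
p = 0.9   # assumed baseline probability of survival

gain_from_xrisk_reduction = 0.00001 * V        # +0.001 percentage points of survival, each worth V
gain_from_trajectory_change = p * 0.00002 * V  # +0.002% of V, but only realised if we survive

print(gain_from_xrisk_reduction, gain_from_trajectory_change)
# 1e-05 vs 1.8e-05: with these numbers the trajectory change wins whenever p > 0.5.
```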
About the decision theory thing: You might think (a) that the act of saving the world will in expectation cause more harm than good, in some contexts, but also (b) that, upon observing yourself engaged in the x-risk-reduction act, you would learn something about the world that raises your subjective expectation of the value of the future conditional on survival. In such cases, EDT would recommend the act, but CDT would not. If you’re familiar with this decision theory stuff, this is just a generic application of it; there’s nothing too profound going on here.
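To make that concrete, here’s a toy numerical version (all numbers invented for illustration): suppose there are “good-future” and “bad-future” worlds, the act of doing x-risk work is evidence that we’re in the good one without causing us to be, and the act causally bumps up survival probability by the same small amount either way.

```python
# Toy illustration of the EDT/CDT gap (all numbers are made up for the example).
# Two possible worlds: in the "good" world the surviving future is very valuable,
# in the "bad" world it is net negative. Suppose the act of working on x-risk
# reduction is evidence that we are in the good world, without causing it.

p_good = 0.3                    # prior probability of the good world
p_good_given_act = 0.6          # probability of the good world, conditional on observing ourselves act
value = {"good": 100.0, "bad": -60.0}   # value of the future conditional on survival
delta = 0.01                    # causal increase in survival probability from the act, in either world

# CDT: weight worlds by the prior, since the act doesn't cause which world we're in
cdt_ev = delta * (p_good * value["good"] + (1 - p_good) * value["bad"])

# EDT: weight worlds by the probability conditional on the act itself
edt_ev = delta * (p_good_given_act * value["good"] + (1 - p_good_given_act) * value["bad"])

print(cdt_ev, edt_ev)   # about -0.12 vs about +0.36
```

The only point of the sketch is that the two theories weight the worlds differently: CDT uses the prior, EDT uses the probabilities conditional on the act, so they can disagree about the sign.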
About the main thing: It sounds like you’re pointing out that stocking bunkers full of canned beans, say, would “save the world” only after most of it has already been bombed to pieces, and in that event the subsequent future couldn’t be expected to go so well anyway. This is definitely an example of the point I’m trying to make—it’s an extreme case of “the expected value of the future not equaling the expected value of the future conditional on the fact that we marginally averted a given x-risk”—but I don’t think it’s the most general illustration. What I’m saying is that an attempt to save the world even by preventing it from being bombed to pieces in the first place doesn’t do as much good as you might think, because your prevention effort only saves the world if it turns out that there would have been a nuclear disaster but for your efforts. If it turns out (even if we never actually find out) that your effort is what saved us all from nuclear annihilation, that means we probably live in a world more prone to nuclear annihilation than we otherwise would have thought. And that, in turn, doesn’t bode well for the future.
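Here’s a toy Bayesian version of that last point, again with invented numbers: suppose “fragile” worlds are both more likely to make your prevention effort pivotal and worse in expectation conditional on survival.

```python
# Toy Bayesian sketch (illustrative numbers only). "Fragile" worlds are more prone
# to nuclear catastrophe, so a marginal prevention effort is more likely to have
# been pivotal there -- but fragile worlds also have worse prospects going forward.

p_fragile = 0.2                                     # prior probability we live in a fragile world
p_pivotal = {"fragile": 0.10, "robust": 0.01}       # chance our effort is what averted the disaster
future_value = {"fragile": 10.0, "robust": 100.0}   # expected value of the future conditional on survival

# Posterior that the world is fragile, given that our effort turned out to be pivotal
evidence = p_fragile * p_pivotal["fragile"] + (1 - p_fragile) * p_pivotal["robust"]
p_fragile_given_pivotal = p_fragile * p_pivotal["fragile"] / evidence

ev_unconditional = p_fragile * future_value["fragile"] + (1 - p_fragile) * future_value["robust"]
ev_given_pivotal = (p_fragile_given_pivotal * future_value["fragile"]
                    + (1 - p_fragile_given_pivotal) * future_value["robust"])

print(ev_unconditional, ev_given_pivotal)
# ~82 unconditionally vs ~36 conditional on having been pivotal: the "saved" future
# is worth less in expectation than the future as naively estimated.
```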
Does any of that make things clearer?