This seems to be an issue of considering only one side of the possibility distribution. I think it's very arguable that a post-nuclear-holocaust society is just as likely, if not more likely, to be more racist/sexist, more violent or suspicious of others, more cruel to animals (if only because our progress in, e.g., lab-grown meat would be undone), etc. in the long term. This is especially the case if history just keeps going through cycles of civilizational collapse and rebuilding, in which case we might have to suffer for hundreds of thousands of years (and subject animals to that many more years of cruelty) until we finally develop a civilization capable of maximizing human/sentient flourishing (assuming we don't go extinct!).
You cite the example of post-WW2 peace, but I don’t think it’s that simple:
there were many wars afterwards (e.g., the Korean War, Vietnam); they just weren't as global in scale. Thus, WW2 may have been more of a peak outlier at a unique moment in history.
It's entirely possible WW2 could have led to another, even worse war; we may just have gotten lucky. (Consider how people thought WW1 would be the war to end all wars because of its brutality, only for WW2 to follow a few decades later.)
Inventions such as nuclear weapons, the strengthening of the international system of trade and diplomacy, the disenchantment with fascism/totalitarianism (with the exception of communism), and a variety of other factors seem to have helped prevent a WW3; the brutality of WW2 was not the only factor.
Ultimately, the argument that seemingly horrible things like nuclear holocausts (or the Holocaust) or world wars are more likely to produce good outcomes in the long term still seems improbable to me. (I just wish someone who is more familiar with longtermism would contribute.)
You're completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or nuclear war) are improbable.
Indeed, as I have said, even if the probability of the future scenarios I am positing is on the order of 0.00001 (which makes them improbable), that can hardly be grounds to dismiss the argument in this context, precisely because longtermism appeals to the immense consequences of events whose absolute probability is very low.
At the risk of quoting out of context:
If we increase the odds of survival at one of the filters by one in a million, we can multiply one of the inputs for C by 1.000001. So our new value of C is 0.01 x 0.01 x 1.000001 = 0.0001000001. New expected time remaining for civilization = M x C = 10,000,010,000.
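To make the quoted arithmetic concrete, here is a minimal sketch of that calculation; the value M = 1e14 and the two filter survival probabilities of 0.01 are assumptions back-solved from the quoted figures (they are not restated in this thread), and the helper expected_time is purely illustrative:

```python
# Minimal sketch of the quoted great-filter arithmetic (assumed inputs, see above).
M = 1e14                 # assumed maximum time remaining if every filter is passed
filters = [0.01, 0.01]   # assumed survival probability at each of two filters

def expected_time(filter_probs, max_time=M):
    # Expected time remaining = max_time times the product of filter survival probabilities.
    c = 1.0
    for p in filter_probs:
        c *= p
    return max_time * c

baseline = expected_time(filters)                              # ~10,000,000,000
improved = expected_time([filters[0] * 1.000001, filters[1]])  # ~10,000,010,000

print(f"{baseline:,.0f} -> {improved:,.0f} (+{improved - baseline:,.0f})")
```

Under those assumed numbers, a one-in-a-million improvement at a single filter adds roughly 10,000 expected years, which is exactly the kind of tiny-probability, enormous-consequence trade the quoted passage relies on.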
In much the same way, it's absolutely correct that I am referring to one side of the distribution; however, that is not because the other side does not exist or is not relevant, but rather because I want to highlight the magnitude of the uncertainty and how it expands with time.
It also follows that I am in no way disputing (and my argument is somewhat orthogonal to) the different WW2 counterfactuals you've outlined.
I see what you mean, and again I have some sympathy for the argument that it's very difficult to be confident about a given probability distribution in terms of both positive and negative consequences. However, to summarize my concerns here, I still think that even with a large amount of uncertainty, there is typically still reason to think that some things will have a positive expected value: preventing a given event (e.g., a global nuclear war) might have a ~0.001% chance of making existence worse in the long term (possibility A), but it seems fair to estimate that preventing the same event also has a ~0.1% chance of producing an equal amount of long-term net benefit (B). Both estimates can be highly uncertain, but there doesn't seem to be a good reason to expect that (A) is more likely than (B).
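To put rough numbers on that, here is a minimal expected-value sketch; the ~0.001% and ~0.1% figures are the illustrative estimates from the paragraph above, and the payoff scale V (with harm and benefit assumed equal in magnitude) is a hypothetical placeholder:

```python
# Sketch of the (A) vs (B) comparison above, with illustrative numbers only.
V = 1.0          # placeholder magnitude of the long-term harm/benefit (assumed equal)
p_worse = 1e-5   # (A): ~0.001% chance that preventing the event makes the future worse
p_better = 1e-3  # (B): ~0.1% chance that preventing it makes the future better

expected_value = p_better * V - p_worse * V
print(expected_value)  # 0.00099 * V, i.e. positive under these assumptions
```

On these illustrative numbers, the expected value of prevention stays positive unless (A) is judged roughly a hundred times more likely than estimated.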
My concern thus far has been that it seems like your argument is saying “(A) and (B) are both really hard to estimate, and they’re both really low likelihood—but neither is negligible. Thus, we can’t really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)” (If that isn’t your argument, feel free to clarify!). In contrast, my point is “Sometimes we can’t know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.”
...it seems like your argument is saying “(A) and (B) are both really hard to estimate, and they’re both really low likelihood—but neither is negligible. Thus, we can’t really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)”
Thanks, that is a fairly accurate summary of one of the crucial points I am making, except that I would also add that the difficulty of estimation increases with time. And this is a major concern here, because the case for longtermism rests precisely on there being a greater and greater number of humans (and other sentient independent agents) as the time horizon expands.
Sometimes we can’t know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.
Fully agree that we should try, but the case for longtermism remains rather weak until we have some estimates and bounds that can be reasonably justified.