I see what you mean, and again I have some sympathy for the argument that it’s very difficult to be confident about a given probability distribution in terms of both positive and negative consequences. However, to summarize my concerns here: I still think that even if there is a large amount of uncertainty, there is typically still reason to think that some things will have a positive expected value. Preventing a given event (e.g., a global nuclear war) might have a ~0.001% chance of making existence worse in the long term (possibility A), but it seems fair to estimate that preventing the same event also has a ~0.1% chance of producing an equal amount of long-term net benefit (possibility B). Both estimates can be highly uncertain, but there doesn’t seem to be a good reason to expect that (A) is more likely than (B).
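To make the arithmetic concrete, here is a minimal sketch using only the illustrative figures above (the variable names and the assumption that the harm and benefit are equal in size are mine, not real estimates):

```python
# Toy expected-value comparison for preventing a catastrophic event.
# Figures are the illustrative ones from the comment above, not real estimates.
p_worse = 0.00001    # ~0.001% chance prevention makes the long-term future worse (A)
p_better = 0.001     # ~0.1% chance prevention produces a long-term net benefit (B)
magnitude = 1.0      # assume the harm in (A) and the benefit in (B) are equal in size

expected_value = p_better * magnitude - p_worse * magnitude
print(expected_value)  # 0.00099 > 0: positive as long as p_better exceeds p_worse
```

The point is just that as long as our estimate for (B) exceeds our estimate for (A), the expected value comes out positive, even though both numbers are tiny and highly uncertain.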
My concern thus far has been that it seems like your argument is saying “(A) and (B) are both really hard to estimate, and they’re both really low likelihood—but neither is negligible. Thus, we can’t really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)” (If that isn’t your argument, feel free to clarify!). In contrast, my point is “Sometimes we can’t know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.”
...it seems like your argument is saying “(A) and (B) are both really hard to estimate, and they’re both really low likelihood—but neither is negligible. Thus, we can’t really know whether our interventions are helping. (With the implicit conclusion being: thus, we should be more skeptical about attempts to improve the long-term future)”
Thanks, that is a fairly accurate summary of one of the crucial points I am making, except that I would also add that the difficulty of estimation increases with time. And this is a major concern here, because the case for longtermism rests precisely on there being a greater and greater number of humans (and other sentient, independent agents) as the time horizon expands.
Sometimes we can’t know the probability distribution of (A) vs. (B), but sometimes we can do better-than-nothing estimates, and for some things (e.g., some aspects of X-risk reduction) it seems reasonable to try.
Fully agree that we should try, but the case for longtermism remains rather weak until we have some estimates and bounds that can be reasonably justified.