Let’s split future events into two groups: 1) events that are not influenced by people, and 2) events that are influenced by people.
In group 1, we can create predictive models, use probability, and even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
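To make this concrete, here is a minimal sketch of that standard machinery applied to a group-1 event. The scenario and numbers (meteor sightings over observed nights) are hypothetical, chosen only to show a conjugate Bayesian update working as advertised:

```python
# A minimal sketch: standard Bayesian updating on a group-1 event.
# Hypothetical scenario: estimating the nightly probability of seeing a
# meteor from a given site. All numbers are made up for illustration.

observed_nights = 200   # nights watched
meteor_nights = 50      # nights with at least one meteor sighted

# Beta(1, 1) uniform prior with a conjugate Beta-Bernoulli update.
alpha = 1 + meteor_nights
beta = 1 + (observed_nights - meteor_nights)

posterior_mean = alpha / (alpha + beta)
posterior_std = ((alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5

print(f"P(meteor tonight) ~= {posterior_mean:.3f} (std {posterior_std:.3f})")
# Nothing here depends on what any person decides to do, so the model's
# uncertainty estimate is meaningful and tightens as data accumulates.
```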
In group 2, we can still create predictive models, but they’ll be nonsensical. That’s because we cannot know how knowledge creation will affect those events. We don’t even need any fancy reasoning; it’s already implied in the definitions of terms like knowledge creation and discovery. You can’t discover something before you discover it, before the knowledge is created.
So, up until recently, the bodies of the solar system fell into group 1. We can predict their positions many years hence, as long as people don’t get involved. However, now that we’re becoming capable of intervening, there’s no way to know what we’ll do with the planets and asteroids in the future. Maybe we’ll find a use for some mineral found predominantly in certain asteroids, or maybe we’ll use a planet to block heat from the sun as it expands, or maybe we’ll detect some other risk or benefit and make changes accordingly. In fact, this last type of change will predominate the farther we go into the future.
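As a toy illustration of why those predictions work, here is a sketch that projects a body’s position far into the future using Kepler’s third law and a simplified circular orbit. The constants roughly approximate Mars; this is an illustrative calculation under stated assumptions, not an ephemeris:

```python
import math

# Simplified group-1 prediction: a body on a circular orbit, no people involved.
MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
a = 2.279e11        # semi-major axis (roughly Mars), m

period = 2 * math.pi * math.sqrt(a ** 3 / MU_SUN)  # Kepler's third law

def position_deg(t_seconds, theta0_deg=0.0):
    """Angular position along the orbit after t seconds (circular-orbit assumption)."""
    mean_motion = 2 * math.pi / period  # rad/s
    return (theta0_deg + math.degrees(mean_motion * t_seconds)) % 360.0

years = 100
t = years * 365.25 * 86400
print(f"Orbital period: {period / 86400:.1f} days")
print(f"Angular position after {years} years: {position_deg(t):.1f} deg")
# No human choice enters this calculation. That is exactly why it works,
# and exactly what breaks once people start moving asteroids around.
```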
This is an extreme example, but it applies across the board. Any time human knowledge creation impacts a system, there’s no way to model that impact before the knowledge is created.
Longtermism, therefore, hinges on the claim that we have some idea of how to impact the long-term future. But even more than in the solar system example, that future will be overwhelmingly dominated by new knowledge, and hence unknowable to us today, impossible to anticipate.
Sure, we can guess, and in the case of known future threats like nuclear war, we should guess and should try to ameliorate the risk. But those problems apply to the very near future as well; they are problems facing us today (that’s why we know a fair bit about them). We shouldn’t waste effort trying to calculate the risk, because we can’t do that for events in group 2. Instead, we know from our best explanations that nuclear war is a risk.
In this way the threat of nuclear war is like the proverbial turkey: if the turkey hears even a rumor about Thanksgiving traditions, should it sit down and try to update its priors? Or should it take the entirely plausible theory seriously, try to test it (have other turkeys been slaughtered? are there any turkeys over a year old?), and decide whether it’s worth taking some precautions?
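For what it’s worth, here is the same Bayesian machinery from the first sketch, run from the turkey’s point of view with hypothetical numbers. The update is mathematically impeccable and maximally confident at exactly the wrong moment, because the farmer’s plan, a piece of knowledge, never appears in the data:

```python
# The turkey's model: a Beta-Bernoulli update on "will I be fed tomorrow?"
# Hypothetical data: fed every single day since hatching.

fed_days = 999  # consecutive days the farmer has brought food

alpha, beta = 1 + fed_days, 1  # Beta(1, 1) prior; every day was a "success"
p_fed_tomorrow = alpha / (alpha + beta)

print(f"After {fed_days} days: P(fed tomorrow) = {p_fed_tomorrow:.4f}")
# -> 0.9990, on the eve of Thanksgiving. The number is not wrong as
# arithmetic; it is nonsensical as a forecast, because the relevant fact
# lives in someone's head, not in the feeding record. Testing the
# explanatory theory (are there any year-old turkeys?) beats recalculating.
```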