I meant in the sense that humans were alive 10,000 years ago, and could have caused the extinction of humanity then (and in that decision, by the logic of the OP, they would have assigned zero weight to us existing).
I’m not sure that choice is a real one humanity actually faced though. It seems unlikely that humans alive 10,000 years ago actually had the capability to commit omnicide, still less the ability to avert future omnicide for the cost of lunch. It’s not a strong reductio ad absurdum because it implies a level of epistemic certainty that didn’t and doesn’t exist.
The closest ancient-world analogue is humans presented with entirely false choices: sacrifice their lunch to long-forgotten deities to preserve the future of humanity. Factoring in the possible existence of billions of humans 10,000 years into the future wouldn’t have allowed them to make decisions that better ensured our survival, so I have absolutely no qualms with those who discounted the value of our survival heavily enough to decline to proffer their lunch.
Even if humanity 10,000 years ago had been acting on good information (perhaps a time traveller from this century warned them that cultivating grasses would set them on a path towards civilization capable of omnicide) rather than avoiding a Pascal’s mugging, it’s far from clear that humanity deciding to go hungry to prevent the evils of civilization from harming billions of future humans would (i) not have ended up discovering the scientific method and founding civilizations capable of splitting atoms and engineering pathogens a bit later on anyway, or (ii) have ended up with as many happy humans if their cultural taboos against civilization had somehow persisted. So I’m unconvinced of a moral imperative to change course even with that foreknowledge. We don’t have comparable foreknowledge of any course the next 10,000 years could take, and our knowledge of actual and potential existential threats gives us more reason to discount the potential big expansive future even if we act now, especially if the proposed risk-mitigation is as untenable and unsustainable as “end science”.
If humanity ever reached the stage where we could meaningfully trade inconsequential sacrifices for the prevention of cataclysms that [with high certainty] only affect people in the far future, that might be the time to revisit the discount rate, but it’s supposed to reflect our current epistemic uncertainty.