I like your comparisons with other historical cases where people thought they had inevitable theories about society; it's something I think about too.
I do have a pet peeve, though, about the following claim:
> Expected values were being used by the authors inappropriately (that is, without data to inform the probability estimates).
Let's consider a very short argument for strong longtermism (and a tractable way to influence the distant future by reducing x-risk):
- There is a lot of future ahead of us.
- The universe is large.
- Humans are fragile and the universe is harsh: most planets are not habitable for us (yet), and we don't survive in most of space by default.
⇒ Therefore the expected near-term outcomes of your actions become rounding errors compared to the expected value of making sure humanity survives (see the toy numbers after this list).
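To make the "rounding error" point concrete, here is a toy expected-value comparison. Every number in it is made up purely for illustration; none is an estimate of anything:

```python
# Toy expected-value comparison; all numbers are made up for illustration.
near_term_value = 1e9              # value of a great near-term intervention (arbitrary units)
future_value_if_we_survive = 1e30  # stand-in for "there is a lot of future ahead of us"
p_shift = 1e-6                     # tiny bump in survival probability from x-risk work

ev_near_term = near_term_value
ev_xrisk = p_shift * future_value_if_we_survive  # = 1e24

# The near-term intervention is a rounding error in the comparison:
print(ev_xrisk / ev_near_term)  # 1e15
```

The point survives large changes to these numbers: as long as the future term is vast, even tiny probability shifts dominate the comparison.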
All three of these points (while more might be necessary for a convincing case for longtermism) are very much informed by physical theories, which in turn have been informed by data about the world we live in (observing through a telescope, going to the Moon)!
To illustrate:
- Suppose I had been born in a universe where physicists predicted, with a high degree of certainty (through well-established theories, like thermodynamics in our world), that the universe, all of which was already inhabited, faced an inevitable heat death 1000 years from now. Then I would think the arguments for longtermism were weak, since they would not apply to the universe we live in.
I am not convinced by your arguments around epistemology, and I don't understand your fascination with Popper. Popper's philosophy seems more like an informal way to make Bayesian updates (see the sketch below). You did not provide sufficient evidence to convince me of the contrary. While I agree that rigid Bayesianism has flaws, my current best guess involves more subjectivism, not less.
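One way to see the "Popper as informal Bayes" intuition: falsification looks like a limiting case of Bayesian updating, where a hypothesis that assigns near-zero probability to an observed outcome has its posterior driven to near zero. A toy sketch with made-up numbers:

```python
# Falsification as a limiting case of Bayesian updating (made-up numbers).
prior_h = 0.5          # prior credence in hypothesis H (illustrative)
p_e_given_h = 1e-9     # H says the observed outcome E is (nearly) impossible
p_e_given_not_h = 0.5  # E is unsurprising if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
posterior_h = (p_e_given_h * prior_h) / (
    p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
)
print(posterior_h)  # ~2e-9: observing E effectively "refutes" H, Popper-style
```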