Maybe my post simplified things too much, but I’m actually quite open to learning about possibilities for improving the long-term future, even those that are hard to understand or difficult to talk about. I sympathize with longtermism, but can’t shake the feeling that epistemic uncertainty is an underrated objection.
When it comes to your linked question about how near-termist interventions affect the far future, I sympathize with Arepo’s answer. I think the effect of many such actions decays toward zero fairly quickly. This is potentially different for actions that explicitly try to affect the long term, such as many kinds of AI work. That’s why I would like high confidence in the sign of such an action’s impact. Is that too strong a demand?
Thank you. This is valuable to hear.