I think there aren’t reliable things that are a) robustly good for the long-term future under a wide set of plausible assumptions, b) highly legibly so, c) easy to talk about in public, d) within 2 OOMs of the cost-effectiveness of the best interventions by our current best guesses, and e) not already being done.
I think your question implies that a) is the crux, and I do have a lot of sympathy for that view. But the difficulty of generating answers to your question is at least partly due to the expectations in b)–e) being baked in as well.
Maybe my post simplified things too much, but I’m actually quite open to learning about possibilities for improving the long-term future, even those that are hard to understand or difficult to talk about. I sympathize with longtermism, but I can’t shake the feeling that epistemic uncertainty is an underrated objection.
When it comes to your linked question about how near-termist interventions affect the far future, I sympathize with Arepo’s answer. I think the effect of many such actions decays towards zero somewhat quickly (see the toy sketch below). This is potentially different for actions that explicitly try to affect the long term, such as many types of AI work. That’s why I would like high confidence in the sign of such an action’s impact. Is that too strong a demand?
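To make that intuition concrete, here is a minimal sketch of the kind of decay I have in mind. This is my own toy model, not Arepo’s, and the 80%-per-decade persistence rate is a purely illustrative number, not an empirical estimate:

```python
# Toy model: expected residual effect of a near-termist intervention,
# assuming a fixed (hypothetical) fraction of its influence persists each decade.

def residual_effect(initial_effect: float, persistence_per_decade: float, decades: int) -> float:
    """Expected effect remaining after `decades` periods, under geometric decay."""
    return initial_effect * persistence_per_decade ** decades

if __name__ == "__main__":
    for decades in (1, 5, 10, 30):
        remaining = residual_effect(1.0, 0.8, decades)
        print(f"{decades * 10} years: {remaining:.4f}")
    # With 80% persistence per decade, roughly 11% of the effect remains after a
    # century and about 0.1% after three centuries, which is what I mean by
    # "decays towards zero somewhat quickly".
```

The real decay profile is obviously contested; the point is only that under fairly generous persistence rates, the far-future component of a near-termist action shrinks fast unless something explicitly keeps it on target.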
Thank you. This is valuable to hear.