In reality, the bulk of an intervention’s impact is composed of indirect & long-run effects, which are difficult to observe and difficult to estimate.
Robin Hanson has some posts which are skeptical. I think there’s probably a power law distribution of impact on the far future, and most actions are relatively unimpactful. You could argue that the scale of the universe is big enough in time & space that even a small relative impact on the far future will be large in absolute terms. But to compromise with near-future-focused value systems, maybe we should still focus on the near-term effects of interventions which seem relatively unimpactful in the long run.
BTW, your typology neglects work to prevent s-risks.
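The power-law claim above can be illustrated with a small simulation. This is a hypothetical sketch (the distribution shape and parameter are made-up for illustration, not taken from the thread): under a Pareto distribution, almost every individual action is unimpactful, yet a tiny tail accounts for a large share of total impact.

```python
# Sketch: sample "far-future impacts" from a heavy-tailed distribution and
# measure how much of the total comes from the top 1% of actions.
import random

random.seed(0)  # fixed seed for reproducibility

# Shape alpha = 1.2 is an arbitrary illustrative choice;
# smaller alpha means a heavier tail.
alpha = 1.2
impacts = [random.paretovariate(alpha) for _ in range(100_000)]

impacts.sort(reverse=True)
top_1_percent = impacts[: len(impacts) // 100]
share = sum(top_1_percent) / sum(impacts)

print(f"Share of total impact from the top 1% of actions: {share:.0%}")
```

Under these assumptions the top 1% of actions account for a large fraction of the total, which is one way to make precise the claim that "most actions are relatively unimpactful" while the aggregate is tail-dominated.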
Thanks for the pointers to Hanson on this!
Agreed, and I think part of the trouble is that it’s very hard to tell prospectively whether an action is going to have a large impact on the far future.
I’m not convinced of that.
Do you have examples of heuristics you use to prospectively assess whether an action is going to have a large impact on the far future?
Is it similar to the sorts of actions that I believe have had a large impact on the future in the past?
Got it. Is there an easy-to-articulate description of how you build the set of past actions that you believe had a large impact on the future?
I use what I’ve read about history to try & think of historical events I consider pivotal which share important similarities with the action in question. I also try to estimate the base rate of historical people taking actions similar to the action in question, in order to have an estimate for the denominator.
If I was trying to improve my ability in this area, I might read books by Peter Turchin, Yuval Noah Harari, Niall Ferguson, Will and Ariel Durant, and people working on Big History. Maybe this book too. Some EA-adjacent discussion of this topic: 1, 2, 3, 4.
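The base-rate procedure described above amounts to a simple ratio. Here is a minimal sketch with entirely made-up numbers (both counts are hypothetical placeholders, not estimates anyone in the thread endorsed):

```python
# Base-rate sketch: P(an action of this type turns out pivotal)
# ≈ (similar historical actions judged pivotal) / (all similar historical actions).
# Both numbers below are invented for illustration.

pivotal_cases = 3               # similar past actions that proved pivotal
similar_actions_total = 10_000  # rough estimate of the denominator

base_rate = pivotal_cases / similar_actions_total
print(f"Estimated base rate of a similar action being pivotal: {base_rate:.2%}")
```

The hard part, of course, is not the division but defining the reference class tightly enough that both the numerator and denominator are countable.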
Good point; for the purposes of the argument they could be grouped with x-risks.
I’m not sure what meta-ethical framework we would use to broker such a compromise. Perhaps some kind of moral congress (a)?
I haven’t yet figured out how to allot the proportions of such a congress in a way that feels principled. Do you know of any work on this?
Not offhand, but I would probably use some kind of Bayesian approach.
See toy dialogue.