Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) := We are living at the most influential time ever.
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view. It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.
Two clear and common channels I have seen are:
Longtermism leads to looking around for things that would have lasting impacts (e.g. Parfit and Singer attending to existential risk, noticing that a large portion of all technological advances have come in the last few centuries, and that a large portion of the remainder look likely to come in the next few centuries, including technologies that could bring much higher existential risk)
People pay attention to the fact that the last few centuries have accounted for so much of all technological progress, and to the likely gains in the next few centuries (based on our knowledge of physical laws, existence proofs from biology, and trend extrapolation), noticing things that can have incredibly long-lasting effects that dwarf short-run concerns
Personally, I think I had a sort of amplifying feedback loop between longtermism and assigning a “significant” credence to HoH (I’m not actually sure what credence I assign to it, but it probably at least sometimes feels >10%). Something very roughly like the following:
1. I had a general inclination towards utilitarianism and a large moral circle, which got me into EA.
2. EA introduced me to arguments about longtermism, and to arguments that existential risk this century is high enough to be a global priority (even if it’s quite “low” by usual standards).
3. I started to become convinced by those arguments, learned more about them, and began to switch my focus to x-risk reduction.
4. Learning and thinking more about x-risks made the potential scale and quality of the future if we avoid them more salient, which made longtermism more emotionally resonant. This then feeds back into 2 and 3.
5. Learning and thinking more about x-risks and longtermism also exposed me to more arguments against concerns about x-risks, and meant I was positioned to respond to them not with “Ok, let’s shift some probability mass towards the best thing to work on being global poverty and/or animal welfare” but instead “Ok, let’s shift some probability mass towards the best thing to work on being longtermist efforts other than current work on x-risks.” This led me to think more about various ways longtermism could be acted on, and thus more ways the future could be excellent or terrible, and thus more reasons why longtermism feels important.
I’m not saying this is an ideal reasoning process. Some of it arguably looks a little like motivated reasoning or entering something of an echo chamber. But I think that’s roughly the process I went through.