The Hinge of History Hypothesis: Reply to MacAskill (Andreas Mogensen)
This paper was originally published as a working paper in August 2022 and is forthcoming in Analysis.
Abstract
Some believe that the current era is uniquely important with respect to how well the rest of human history goes. Following Parfit, call this the Hinge of History Hypothesis. Recently, MacAskill has argued that our era is actually very unlikely to be especially influential in the way asserted by the Hinge of History Hypothesis. I respond to MacAskill, pointing to important unresolved ambiguities in his proposed definition of what it means for a time to be influential and criticizing the two arguments used to cast doubt on the claim that the current era is a uniquely important moment in human history.
Introduction
Some believe that the current era is a uniquely important moment in human history. We are living, they claim, at a time of unprecedented risk, heralded by the advent of nuclear weapons and other world-shaping technologies. Only by responding wisely to the anthropogenic risks we now face can we survive into the future and fulfil our potential as a species (Sagan 1994; Parfit 2011; Bostrom 2014; Ord 2020).
Following Parfit (2011), call the hypothesis that we live at such a uniquely important time the Hinge of History Hypothesis (3H). Recently, MacAskill (2022) has argued that 3H is “quite unlikely to be true.” (332) He interprets 3H as the claim that “[w]e are among the very most influential people ever, out of a truly astronomical number of people who will ever live” (339) and defines a period of time as influential in proportion to “how much expected good one can do with the direct expenditure (rather than investment) of a unit of resources at [that] time” (335), where ‘investment’ may refer “to both financial investment, and to using one’s time to grow the number of people who are also impartial altruists.” (335 n.13) MacAskill thus relates the truth or falsity of 3H to the practical question of the optimal time at which to expend resources to achieve morally good outcomes, considered impartially.
MacAskill presents two arguments against 3H. The first is an argument that the prior probability that we are living at the most influential time in history should be very low, because we should reason as if we represent a random sample from observers in our reference class. The second is an inductive argument that we should expect future people to have more influence over human history because the overall trend throughout human history is for later generations to be more influential.
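To make the shape of the priors argument concrete, here is a minimal worked version; the reference-class sizes used below are illustrative assumptions, not MacAskill's own estimates. If we reason as if we are a random draw from the $N$ observers (or time periods) in the reference class, the uniform prior that ours is the single most influential is

$$
\Pr(\text{ours is the most influential}) \approx \frac{1}{N},
$$

so a reference class of $10^{6}$ centuries would give a prior of roughly $10^{-6}$, and a person-based reference class containing on the order of $10^{18}$ people who will ever live would push the corresponding prior down toward $10^{-18}$.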
In my view, neither of these arguments should convince us. As I argue in section 2, MacAskill’s priors argument relies on formulating 3H in a way that does not conform to how this hypothesis is traditionally understood. Moreover, I will argue in section 3 that MacAskill’s definition of what it means for a time to be influential leaves too many unresolved ambiguities for his inductive argument to work.
I’m interested in the discussion of whether in fact we are at a hinge of history; maybe this is a good comments section for that. I agree that Will’s analysis barely scratches the surface and has some flaws.
Factors under consideration for me:
Existence of technologies that can directly shape future society, making the world much better or much worse: computation and AI, the internet & social media, nanotech, biotech, the printing press, energy production / Dyson spheres.
Do population/economic growth rates matter? I.e., if we are growing fast now vs. slowly, what would that imply?
Institutional attitudes: Do we have institutions that change behavior in controllable ways? What do people believe about the future impact of tech/ideas like money, life extension, social media, systems of government like the UN/democracy/Marxism/fascism, principles like liberalism/economics, strategies for national wealth like expansionism/colonialism/mercantilism, and so on?
Attitudes about change: are we able to convince people of things? Do people change their minds quickly or slowly? What systems exist to get information out, and what feedback mechanisms do they have?
Moral attitudes: How much do people care about others? To what degree do they care about those distant from them? Do people prioritize suffering, pleasure, satisfaction, etc? Do they believe they can change the world? Do they believe that there are moral errors that they or others are regularly making?
Satisfaction & dissatisfaction attitudes: How much do people believe the world should be better than it is, and how motivated are they to “invest” to make things go better? E.g., the Cold War and space exploration, the colonial era, building bridges, tunnels, and other infrastructure.
I see arguments for the hingiest era being in the past, present, or future:
Arguments for the past, e.g. 1780 or thereabouts: there were far fewer people, and they could have predicted (based on observing the spread of religion) that the printing press, the Industrial Revolution, European colonialism/mercantilism, and/or economic liberalism and democracy would have a huge impact. They may also have been able to predict moral progress, e.g. that slavery is bad. They probably would have been able to see that certain institutions had a ton of influence and were in turn influenceable.
My instinct is that they would have failed to predict as much progress in public health as we actually got, and so would have expected future people to live in greater suffering than they do. Maybe this would have reduced their motivation to imagine a future with far more people.
They also probably could not have imagined computing and the internet in any particular detail.
Arguments for this century (2000 to 2100): computing is advancing at a crazy pace; there has never been a technology like this, with such short feedback loops into society. Social media has shown that attitudes can change really quickly when info-consumption is addictive and anyone can publish widely. But these tech changes can’t go on forever; we will surely reach the limits of physics this century, and change will slow down dramatically, so whatever we settle on soon will greatly shape how the future plays out.
A counter-argument is that we haven’t seen much popular moral progress, and it seems to me that there is far more to go here; our pace of tech development is outpacing our moral development.
Also, while institutions have a ton of power, they mostly seem stuck in the past and hard to change; the institution that will shape the next thousand years probably doesn’t exist yet, and it is not clear what it will look like.
Arguments for the future: first, computing is just the beginning; if we survive this era then we’ll reach even more impactful tech, such as bio, nano, space, superluminal travel, etc., and new impactful institutions will arise that don’t depend too heavily on whatever we are doing today, or maybe we’ll be multiplanetary or living in VR or whatever. Second, humans need to ‘catch up’ in moral development to our technological development, and that just takes time and could easily stretch beyond 2100.
Overall I lean towards the present: tech is moving faster now than at any point in the past, and I see reasons for it to slow down by the end of the century. The slow pace of moral development pushes hinginess into the future, but I think the risk that we don’t survive until then outweighs the changes in our morality and societal organization that I expect after that point. If I were certain we would survive another 100 years, then I might be convinced that the future will be more hingey than the present.
Thanks for sharing!
To elaborate:
Using the expected value of 1/N, I estimated the prior probability of this being the most important century to be less than 0.987%, whereas Will’s formulation would have resulted in less than 10^-18 (much lower!).
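A minimal Python sketch of why these two formulations come apart; the scenario probabilities and century counts below are made-up assumptions for illustration, not the model behind the 0.987% figure. Because 1/N is convex, Jensen’s inequality gives E[1/N] ≥ 1/E[N]: long-tailed futures dominate E[N] but contribute almost nothing to E[1/N], so a prior that divides by the expected (astronomically large) size of the reference class comes out many orders of magnitude smaller than one that takes the expectation of 1/N.

```python
# Illustrative sketch of the gap between two ways of setting the prior
# probability that the current century is the most important one.
# The distribution over N (total number of centuries civilization lasts)
# is a made-up assumption, so it will not reproduce the 0.987% or
# 10^-18 figures; it only shows why the two approaches diverge.

scenarios = [
    # (probability of scenario, total number of centuries N)
    (0.50, 10),         # civilization lasts about 1,000 years
    (0.40, 10_000),     # about 1 million years
    (0.10, 10**10),     # an astronomically long future
]

# Approach 1: expectation of 1/N. Each scenario contributes p * (1/N),
# so short-future scenarios dominate the sum.
e_inverse_n = sum(p / n for p, n in scenarios)

# Approach 2: 1 / E[N]. Long-future scenarios dominate E[N], so the
# resulting prior is many orders of magnitude smaller.
e_n = sum(p * n for p, n in scenarios)
inverse_e_n = 1 / e_n

# By Jensen's inequality (1/x is convex), E[1/N] >= 1/E[N] always holds.
print(f"E[1/N] = {e_inverse_n:.2e}")  # about 5.00e-02
print(f"1/E[N] = {inverse_e_n:.2e}")  # about 1.00e-09
```

Swapping in a heavier tail for the long-future scenario drives 1/E[N] down even further while leaving E[1/N] essentially unchanged.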