This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris.
Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible.
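The compounding figure above can be checked directly. This is a minimal sketch, assuming the 5% figure is a real (inflation-adjusted) annual return as the paper's footnote on returns suggests:

```python
# Sketch: compound growth of $1 at a 5% real annual return over 200 years.
principal = 1.0
rate = 0.05   # real (inflation-adjusted) annual return
years = 200
value = principal * (1 + rate) ** years
print(f"${value:,.0f}")  # roughly $17,000
```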
In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true.
The base rate argument
When we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 10²⁴ (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that anyone alive today is amongst the million most influential people would be 1 in 10¹⁸ (1 in a million trillion).
From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential era. Even if there were only 10⁸ (one hundred million) people to come, then in order to move from this extremely sceptical position (1 in 10⁸) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised controlled trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough.
The inductive argument
There is another strong reason to think our time is not the most influential, MacAskill argues:
Premise 1: Influentialness has been increasing over time.
Premise 2: We should expect this trend to continue.
Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness.
Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had the scientific knowledge that we have, they might have used it to pursue a worse moral view. Indeed, it is likely that future generations will discover ways in which we are misguided, both morally and scientifically. If we are mistaken enough, our (well-intentioned) present actions could actually be doing harm. Premise 2 claims that this trend of improvement should continue, which is especially plausible because we can identify gaps in our scientific, technological and moral understanding. Overall, this argument indicates that we should expect future generations to be more influential than we are.
Reasons why our time might be unusual
MacAskill also discusses several reasons one might think that our time is unusual, and therefore may be unusually influential. Our time is unusual because we currently live on a single planet, while most people who will ever live will likely (in expectation) be part of an interplanetary civilisation. We also live at a time of extreme technological progress which cannot continue indefinitely: our current economic growth rate is around 3.5%, but 2% annual growth over the next 10,000 years would result in an economic output of 10¹⁹ (ten million trillion) times the current world GDP for every atom in the galaxy.
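The growth-bound arithmetic above can be sketched in a few lines; the figure of roughly 10⁶⁷ atoms in the Milky Way is a common rough estimate assumed here, not a number taken from the paper:

```python
import math

# Sketch: 2% annual growth compounded over 10,000 years, in orders of magnitude.
log10_output = 10_000 * math.log10(1.02)   # ~86: total output grows ~1e86-fold
log10_atoms = 67                           # assumed atoms in the galaxy, ~1e67
log10_per_atom = log10_output - log10_atoms
print(round(log10_output), round(log10_per_atom))  # 86 19
```

Output 10¹⁹ times current world GDP per atom is an absurd amount, which is why this rate of growth cannot continue indefinitely.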
There are three important ways in which this could make our time unusually influential:
Our single planet is a single point of failure, which may make the risk of extinction temporarily higher than usual.
While we live on a single planet, the most influential people today may have an unusual ability to influence humanity as a whole – both because they can communicate near instantaneously with almost everyone and because their resources are a relatively large fraction of the total. If humanity becomes a much larger space-faring civilisation, both of these will likely change.
Plausibly, the fate of the future will be decided by how we handle some particular technology (such as artificial intelligence or particularly dangerous new weapons) and we are more likely to discover such a technology during a period of rapid growth. 
However, each of these arguments has important caveats. In relation to the first argument, most people who are worried about existential risk believe that a large part of the risk comes from misaligned artificial intelligence, and this would not be significantly reduced by planetary diversification. In relation to the second argument, this period of unusual influence may be prolonged if our civilisation stays earthbound for thousands of years or it just takes longer than we expect to leave the solar system. (It only takes one hour for light to traverse the full diameter of the asteroid belt, so the ability of the most influential people to influence humanity as a whole may remain high for quite some time.) In relation to the third argument, perhaps this period of remarkable economic growth will last longer than most anticipate. Even if this period is short, one could argue that longtermists will be less influential during periods of high economic growth, because the unpredictability of a rapidly changing environment hinders the execution of very long-term projects. Overall, MacAskill thinks that these arguments provide evidence that our time may be the most influential. However, the base rate and inductive arguments show that we should be extremely sceptical that we live at the most important time – and the evidence presented in this section does not seem strong enough to overcome these arguments.
Overall, we probably do not live at the ‘hinge of history’. If we did, this would give us a powerful reason to spend now rather than investing to have a much larger impact later. Instead, the case for investment remains strong.
Daniel Benjamin et al. (2018). Redefine statistical significance. Nature Human Behaviour 2.
Nick Bostrom (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Hilary Greaves and William MacAskill (2021). The case for strong longtermism. GPI Working Paper No. 5-2021.
William MacAskill (2022). Are we living at the hinge of history? Ethics and Existence: The Legacy of Derek Parfit. Oxford University Press. Edited by Jeff McMahan, Tim Campbell, James Goodrich, and Ketan Ramakrishnan.
William MacAskill (2019). When should an effective altruist donate? GPI Working Paper No. 8-2019.
Toby Ord (2020). The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing.
Carl Sagan (1994). Pale Blue Dot: A Vision of the Human Future in Space. Random House.
Philip Trammell (2021). Dynamic public good provision under time preference heterogeneity: theory and applications to philanthropy. GPI Working Paper No. 9-2021.
See Greaves and MacAskill (2021), or the summary of their paper.
See MacAskill (2019) and Trammell (2021).
Of course, inflation decreases what you can buy with the same sum in the future, but here we are talking about real returns (which account for inflation), so you could buy what $17,000 would buy today.
Here the ‘Bayes factor’ is used as a measure of the strength of a piece of evidence: a precise mathematical measure of how much rational beliefs should change in response to that evidence. The Bayes factor required to move from 1 in 100 million to 1 in 10 would be 10 million (because 1/10 = 10 million/100 million). Under plausible assumptions, the Bayes factor of a randomised controlled trial with a p-value of 0.05 is approximately 3 (Benjamin et al., 2018, p. 7), so we would need about 3 million times as much evidence.
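The footnote's arithmetic can be reproduced directly, using its own simplification of treating these small probabilities as odds:

```python
# Sketch of the Bayes-factor arithmetic in the footnote above.
prior = 1e-8              # sceptical starting credence: 1 in 100 million
posterior = 1 / 10        # moderate target credence: 1 in 10
required_factor = posterior / prior   # Bayes factor needed: ~10 million
rct_factor = 3            # approximate Bayes factor of one RCT at p = 0.05
print(required_factor / rct_factor)   # ~3.3 million
```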
One might try to defend a more modest position, instead claiming that this is just one enormously influential time (rather than the most influential), or that it is only the most influential relative to times we can plausibly pass resources to (the next thousand years or so). These claims would require less strong evidence, but we also have less evidence in their favour.
They would have likely believed that non-male, non-white or non-Christian people were less valuable, that strong social hierarchy and slavery were natural and that homosexuality and premarital sex were deeply immoral.
Even if our prospects for becoming an interplanetary civilisation were low, most future people would be part of one (in expectation). This is because an interplanetary civilisation could be very large – there could be many planets with the population of Earth, and they could sustain life much longer.
See Sagan (1994) and Ord (2020).
See Bostrom (2014) and Ord (2020).
Toby Ord (2020) estimates that two thirds of the total risk this century comes from misaligned AI.