Thoughts on whether we’re living at the most influential time in history

(thanks to Claire Zabel, Damon Binder, and Carl Shulman for suggesting some of the main arguments here, obviously all flaws are my own; thanks also to Richard Ngo, Greg Lewis, Kevin Liu, and Sidney Hough for helpful conversation and comments.)

Will MacAskill has a post, Are we living at the most influential time in history?, about what he calls the “hinge of history hypothesis” (HoH), which he defines as the claim that “we are living at the most influential time ever.” Whether HoH is true matters a lot for how longtermists should allocate their effort. In his post, Will argues that we should be pretty skeptical of HoH.

EDIT: Will recommends reading this revised article of his instead of his original post.

I appreciate Will’s clear description of HoH and its difference from strong longtermism, but I think his main argument against HoH is deeply flawed. The comment section of Will’s post contains a number of commenters making some of the same criticisms I’m going to make. I’m writing this post because I think the rebuttals can be phrased in some different, potentially clearer ways, and because I think that the weaknesses in Will’s argument should be more widely discussed.

Summary: I think Will’s arguments mostly lead to believing that you aren’t an “early human” (a human who lives on Earth before humanity colonizes the universe and flourishes for a long time), rather than to believing that early humans aren’t hugely influential; so you should conclude either that humanity doesn’t have a long future or that you probably live in a simulation.

I sometimes elide the distinction between the concepts of “x-risk” and “human extinction”, because it doesn’t matter much here and the abbreviation is nice.

(This post has a lot of very small numbers in it. I might have missed a zero or two somewhere.)

EDIT: Will’s new post

Will recommends reading this revised article of his instead of his original post. I believe that his new article no longer relies on an assumed probability of civilization lasting for a very long time, which means that my criticism “This argument implies that the probability of extinction this century is almost certainly negligible” doesn’t apply to his new post, though it still applies to the EA Forum post I linked. I think my other complaints still stand.

The outside-view argument

This is the argument that I have the main disagreement with.

Will describes what he calls the “outside-view argument” as follows:

1. It’s a priori extremely unlikely that we’re at the hinge of history

2. The belief that we’re at the hinge of history is fishy

3. Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful

Given 1, I agree with 2 and 3; my disagreement is with 1, so let’s talk about that. Will phrases his argument as:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one’s priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that’s the claim we can focus our discussion on.

I have several disagreements with this argument.

This argument implies that the probability of extinction this century is almost certainly negligible

EDIT: This objection of mine applies to Will’s article as it was originally written, but he edited it after posting to change the argument so that my objection no longer applies.

One way that a century could be pivotal is that humanity could go extinct. So this argument suggests that our century shouldn’t have unusually high levels of x-risk.

If you assume, as Will does in his argument, that the probability of humanity lasting a trillion years is 0.01%, and you think that this century has a typical probability of extinction, then you find that the probability of human extinction this century is about 0.00000009%, because (1 − 0.00000009%) ** (1 trillion years / one century, i.e. ten billion centuries) is about 0.01%.
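Here is that arithmetic as a tiny back-of-envelope script (a sketch, using the same assumptions as above: a trillion-year horizon, a 0.01% chance of lasting that long, and identical extinction risk in every century):

```python
# Back-of-envelope: per-century extinction risk implied by a 0.01% chance of
# humanity surviving a trillion years, with the same risk in every century.
centuries = 1e12 / 100            # a trillion years, expressed in centuries
survival_prob = 1e-4              # 0.01% chance of surviving the whole period

# Solve (1 - p) ** centuries == survival_prob for the per-century risk p.
per_century_risk = 1 - survival_prob ** (1 / centuries)
print(f"{per_century_risk:.1e}")  # ~9.2e-10, i.e. roughly 0.00000009%
```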

0.00000009% is a very small probability. Will said during an EA Forum AMA that he puts the probability of x-risk over the next century at less than 1%, which I guess is technically consistent with this number; but if this is really what he thinks (which it probably isn’t), he should probably have mentioned at least a few of the six or so extra zeros between “less than 1%” and 0.00000009%.

It seems more likely to me that Will instead thinks that x-risk over the next century is something like 0.1%. You can do similar math to note that this argument implies that only one in a million centuries can have a probability of extinction higher than 0.07% (assuming all other centuries have exactly zero risk). It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that. (Thanks to Greg Lewis for pointing out this last argument to me.)
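Here’s a sketch of that “similar math” (the exact threshold depends on which long-run survival probability you plug in, so I show two):

```python
# If only a fraction f of centuries carry any extinction risk (the rest carry
# none), how high can that risk be while leaving a given chance of surviving
# a trillion years?
centuries = 1e10                    # a trillion years, in centuries
f = 1e-6                            # "one in a million" centuries are risky
risky_centuries = f * centuries     # i.e. 10,000 risky centuries

for survival_prob in (1e-3, 1e-4):  # 0.1% or 0.01% chance of lasting the whole time
    max_risk = 1 - survival_prob ** (1 / risky_centuries)
    print(f"survival prob {survival_prob:.0e}: max risk per risky century = {max_risk:.2%}")
# Prints roughly 0.07% and 0.09% respectively.
```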

To me, uniform hinginess instead implies no long futures

If I believed in uniform hingeyness the way Will is advocating, I’d probably reason about the future as follows:

“I don’t know how long humanity’s future is. But I do believe that it would be quite surprising if this century had radically higher x-risk levels than later centuries. So I assume that later centuries will have similar x-risk levels to this century. Currently, x-risks seem nontrivial; perhaps nukes alone give us 0.1% per century. Therefore, humanity is unlikely to last more than a thousand centuries.”
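To make the quoted reasoning concrete, here’s a quick illustration assuming a flat 0.1% extinction risk per century:

```python
# Survival probabilities under a constant 0.1% extinction risk per century.
risk_per_century = 0.001
for n_centuries in (1_000, 3_000, 10_000):
    survival = (1 - risk_per_century) ** n_centuries
    print(f"chance of lasting {n_centuries:>6} centuries: {survival:.4%}")
# Roughly 37%, 5%, and 0.005% respectively: very long futures are improbable
# unless this level of risk eventually falls.
```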

I come to this conclusion because I have more evidence about current levels of x-risk than I do about the probability of humanity having a very long future. It seems much more reasonable to make the inference “levels of x-risk are non-negligible, so uniform x-risk means no long future” than “humanity might have a very long future, so levels of x-risk are incredibly low”. Will’s argument as written is making the second inference.

The fishiness comes from the earliness, not the hinginess

I think that the “we shouldn’t be exceptional” prior is mostly an argument against us being early humans, rather than an argument against early times in civilization being hingey.

Here’s an argument that I think is reasonable:

  • Given a long future, the probability of any randomly chosen human living in a hingey time is very low.

  • It seems pretty reasonable to think that there are some hingey early times in human history.

  • We seem to find ourselves in an early part of human history, at a time when x-risk looks non-negligible and which therefore seems hingey. There is something weird about this.

How should we resolve this?

Will’s resolution is to say that in fact, we shouldn’t expect early times in human history to be hingey, because that would violate his strong prior that any time in human history is equally likely to be hingey.

It seems to me that in fact, most of the fishiness here is coming from us finding ourselves in an early time in human history, and that if you condition on us being at an early time in human history (which is an extremely strong condition, because it has incredibly low prior probability), it’s not that surprising for us to find ourselves at a hingey time.

Will thinks that the “early times are hingey” hypothesis needs to meet an incredibly high standard of evidence because it implies that we’re at an exceptional time. But this century doesn’t need to be hingey in order to be exceptional: it’s already extremely implausibly early. If something like 10% of all humans who have ever lived are alive today, then, even if this century is the hingiest one, being alive during the hingiest century is only about 10x as surprising as being among the first 100 billion humans to be alive. So if you require incredible evidence to be convinced of hinginess, it seems like you should require almost as incredible evidence to be convinced of earliness.
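Here’s a toy version of that comparison (the total-population figure is an arbitrary stand-in for a big future; only the ratio matters):

```python
# Compare two "surprises" for a randomly chosen human out of N humans ever:
# being among the first 100 billion born, vs. being alive during the hingiest
# century (taken to be roughly now, with ~10% of all humans so far alive today).
N = 1e16                             # hypothetical total number of humans ever
first_humans = 1e11                  # ~100 billion humans born so far
alive_now = 0.1 * first_humans       # "something like 10% of all humans ever"

p_early = first_humans / N           # chance of being among the first 100 billion
p_hingiest_century = alive_now / N   # chance of being alive in the hingiest century

print(f"{p_early / p_hingiest_century:.1f}")  # 10.0: being in the hingiest century
                                              # is only ~10x more surprising
```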

And so the main result of the “we probably aren’t at an exceptional time” assumption is that we’re almost surely not actually early humans. This is basically just the simulation argument, which has the conclusion that either the future isn’t big, we are in a simulation, or we are very very confused (eg because we’re dreaming or insane). (The simulation argument also includes the possibility that the future has no simulations of early intelligent life in it, which seems relatively implausible to me).

This result is also related to the doomsday argument, which argues that, since we should be very surprised to be unusually early humans, there probably won’t be squillions more humans in the future. (This argument is only persuasive under some theories of anthropics.)

As Will notes, following Brian Tomasik and others, the simulation argument dampens enthusiasm for influencing the far future. But it does this almost entirely by updating us against thinking that we’re early humans, rather than by updating us towards thinking that people who really are early humans aren’t hugely influential.

“Early times in history are particularly hingey” is a priori reasonable

I think we should evaluate the “early times are hingey” hypothesis as if we didn’t know that we are (at least on the face of it) living at such a time; evaluated that way, “early times are hingey” seems pretty plausible a priori.

At the risk of making arguments based on object-level claims about the world instead of abstract generalities, you could note that:

  • Once upon a time, there were very few humans, and they weren’t very resilient to things like volcanic eruptions, and so they could have fairly easily died out.

  • In contrast, if humans colonize the universe, then most of Earth-originating civilization will occur in parts of space that will never again be able to interact with Earth, and so will never face threats that could destroy civilization everywhere at once.

  • In general it seems like technology can be both a positive and a negative factor in global risk: before a certain tech level, no-one has technology that could destroy the world, and there are also hypothetical technologies that could reduce the risk of extinction to basically zero basically forever. It seems kind of reasonable to imagine that civilizations live for a long time if they manage to implement the anti-extinction technologies before they die from their dangerous technologies (or from natural risks).

Will’s argument suggests that all of these require an extremely high standard of evidence which they don’t meet; I don’t think they require such a standard: we should mostly be surprised not that early times look risky, but that we appear to be living in one.

Incidentally, this outside-view argument is also an argument against patient philanthropy

Suppose it were true that you could save money and have it grow exponentially over long time periods. Because invested money does more good the earlier you invest it, this would mean that we have radically better opportunities to do good than most humans. So the outside-view argument also implies that it’s impossible to influence the future via investing.

One way of thinking about this is that investing for a long time is a mechanism by which you can turn earliness into importance, and so it’s incompatible with the belief that you’re early but not important.
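As a toy illustration of that mechanism (the rate of return and the amounts are purely hypothetical):

```python
# How a donation compounds if invested early at an assumed 3% real annual return.
real_return = 0.03     # hypothetical real rate of return per year
donation = 1_000       # dollars invested today

for years in (100, 300, 1_000):
    value = donation * (1 + real_return) ** years
    print(f"after {years:>5} years: ${value:,.0f}")
# The longer the horizon, the more earliness translates into outsized influence,
# which is exactly the sort of influence the outside-view prior says we shouldn't have.
```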

I mention this because I sometimes hear the outside-view argument used as an argument for patient philanthropy, which it in fact is not.

The inductive argument against HoH

I have milder objections to this argument, but I still think it’s worth discussing.

Will argues that hinginess is plausibly increasing steadily, and so we’re currently not at the hingiest time. This seems right to me; I suspect that eg the year in which we build the first superintelligence is hingier than this year. But I claim that, in the long term, x-risk needs to go to zero if we are to have much chance of an extremely long future. And the current increase in hinginess seems unsustainable: the x-risk levels we already seem to have reached would, if sustained, drastically reduce the expected value of even a world that lasts for eg a millennium.
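For a sense of scale (the risk levels here are illustrative, not estimates):

```python
# Probability of surviving a millennium (10 centuries) at various constant
# per-century extinction risk levels.
for risk in (0.001, 0.01, 0.1, 0.2):
    survival = (1 - risk) ** 10
    print(f"{risk:.1%} per century -> {survival:.1%} chance of surviving 1,000 years")
# If sustained, the higher risk levels cut deeply into even a millennium's
# expected value.
```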

So I think we should probably think that this decade isn’t the decade in which the most important things happen, but (conditional on the future being big) it’s plausible enough that this is the hinge century that we should be actively trying to do specific object-level things to make that most-important decade go as well as it can.

I think Will disagrees with this because he already used the outside-view argument to update against current levels of x-risk being as high as they look.

There’s a version of this argument which I find pretty persuasive, which Will sort of makes and which Toby Ord makes at more length in The timing of labour aimed at reducing existential risk. I’ll quote Toby:

Suppose someone considers AI to be the largest source of existential risk, and so spends a decade working on approaches to make self-improving AI safer. It might later become clear that AI was not the most critical area to worry about, or that this part of AI was not the most critical part, or that this work was going to get done anyway by mainstream AI research, or that working on policy to regulate research on AI was more important than working on AI. In any of these cases she wasted some of the value of her work by doing it now. She couldn’t be faulted for lack of omniscience, but she could be faulted for making herself unnecessarily at the mercy of bad luck. She could have achieved more by doing her work later, when she had a better idea of what was the most important thing to do.

I think that there’s quite a strong case to be made that we will indeed be able to work more helpfully on AI risk when we’re closer to AGI being developed. And this suggests that we should spend more of our efforts on e.g. building an AI safety field that will be able to do lots of really good work in the years leading up to AGI, rather than trying to do useful research now; many AI safety researchers explicitly endorse this argument. But this strategy, where you identify concrete events that you want to influence and that look like they might be a few decades away, and you build up capabilities to influence them, is quite different from a strategy where you build up general capabilities without any particular plan for what to apply them to, perhaps for billions of years; I think it’s worth distinguishing these two proposals.

Conclusion

I think Will’s arguments mostly lead to believing that you’re not an early human, rather than believing that early humans aren’t hugely influential.

Throughout this post I’ve only talked about x-risk as a source of hinginess. Other potential sources of hinginess don’t change the analysis much. They can do the following:

  • reduce the probability of x-risk forever, in which case they’re a mechanism by which early times might be hingey

  • make the world less valuable forever; for our purposes this is the same thing as x-risk

  • make the world more valuable forever; I think you can basically treat situations where the world might be made better forever as centuries where there’s a risk of losing large fractions of future value through the good thing failing to happen.

I think there’s interesting discussion to be had about how we should respond to the simulation argument, and about whether we should be executing on object-level plans now or waiting until we’re better informed.