Are we living at the most influential time in history?

I don’t claim originality for any content here; people who’ve been influential on this include Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, and, especially, Holden Karnofsky and Carl Shulman. Everything tentative; errors all my own.


Here are two distinct views:

Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) := We are living at the most influential time ever.

It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view. It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

This post is about separating out these two views and introducing a view I call outside-view longtermism, which endorses longtermism but finds HoH very unlikely. I won’t define outside-view longtermism here, but the spirit is that — as our best guess — we should expect the future to continue the trends of the past, and we should be sceptical of the idea that now is a particularly unusual time. I think that outside-view longtermism is currently a neglected position within EA and deserves some defense and exploration.

Before we begin, I’ll note I’m not making any immediate claim about the actions that follow from outside-view longtermism. It’s plausible to me that whether we have 30% or just 0.1% credence in HoH, we should still be investing significant resources into the activities that would be best were HoH true. The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes. So in what follows I’ll sometimes use this as the comparison activity.

Getting the definitions down

We’ve defined strong longtermism informally above and in more detail in this post.

For HoH, defining ‘most influential time’ is pretty crucial. Here’s my proposal:

a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.

(I’ll also use the term ‘hingier’ to be synonymous with ‘more influential’.)

This definition gets to the nub of the matter, for me. It seems to me that, for most times in human history, longtermists ought, if they could, to have been investing their resources (via values-spreading as well as literal investment) in order that they have greater influence at hingey moments when one’s ability to influence the long-run future is high. It’s a crucial question for longtermists whether now is a very hingey moment, and so whether they should be investing or doing direct work.

It’s significant that my definition focuses on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century.

It’s also significant that, on this definition, you need to take into account the level of knowledge and understanding of the average longtermist altruist at the time. This seems right to me. For example, hunter-gatherers could contribute more to tech speed-up than people now (see Carl Shulman’s post here); but they wouldn’t have known, or been in a position to know, that trying to innovate was a good way to benefit the very long-run future. (In that post, Carl mentions some reasons for thinking that such impact was knowable, but prior to the 17th century people didn’t even have the concept of expected value, so I’m currently sceptical.)

So I’m really bundling two different ideas into the concept of ‘most influential’: how pivotal a particular moment in time is, and how much we’re able to do something about that fact. Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment. If this were true, I would not count this time as being exceptionally influential.

Strong longtermism even if HoH is not true

I mentioned that it’s surprising that strong longtermism and significant credence in HoH are so often held together. But here’s one reason why you might think you should put significant credence in HoH iff you believe longtermism: You might accept that most value is in the long-run future, but think that, at most times in history so far, we’ve been unable to do anything about that value. So it’s only because HoH is true that longtermism is true. But I don’t think that’s a good argument, for a few reasons.

First, given the stakes involved, it’s plausible that even a small chance of being at a period of unusually high extinction or lock-in risk is enough for working on extinction risk or lock-in scenarios to be higher expected value than short-run activities. So, you can reasonably think that (i) HoH is unlikely (e.g. 0.1% likely), but that (ii) when combined with the value of being able to influence the value of the long-run future, a small chance of HoH being true is enough to make strong longtermism true.

Second, even if we’re merely at a relatively hingey time — just not the most hingey time — as long as there are some actions that have persistent long-run effects that are positive in expected value, that’s plausibly sufficient for strong longtermism to be true.

Third, you could even be certain that HoH is false, and that there are currently no direct activities with persistent impacts, but still believe that longtermism is true if, as is natural to suppose, you have the option of investing resources, enabling future longtermist altruists to take action at a time which is more influential.

Arguments for HoH

In this post, I’m going to simply state, but not discuss, some views on which something like HoH would be entailed, and some arguments for thinking HoH is likely. Each of these views and arguments requires a lot more discussion, and often has had a lot more discussion elsewhere.

There are two commonly held views that entail something like HoH:

The Value Lock-in view

Most starkly, according to a view regarding AI risk most closely associated with Nick Bostrom and Eliezer Yudkowsky: it’s likely that we will develop AGI this century, and it’s likely that AGI will quickly transition to superintelligence. How we handle that transition determines how the entire future of civilisation goes: if the superintelligence ‘wins’, then the entire future of civilisation is determined in accord with the superintelligence’s goals; if humanity ‘wins’, then the entire future of civilisation is determined in accord with whoever controls the superintelligence, which could be everyone, or could be a small group of people. If this story is right, and we can influence which of these scenarios occurs, then this century is the most influential time ever.

A related, but more general, argument is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point. (H/T Carl Shulman for stating this more general view to me).

The Time of Perils view

According to the Time of Perils view, we live in a period of unusually high extinction risk, where we have the technological power to destroy ourselves but lack the wisdom to be able to ensure we don’t; after this point annual extinction risk will go to some very low level. Support for this view could come from both outside-view and inside-view reasoning: the outside-view argument would claim that extinction risk has been unusually high since the advent of nuclear weapons; the inside-view argument would point to extinction risk from forthcoming technologies like synthetic biology.

The ‘unusual’ is important here. Perhaps extinction risk is high at this time, but will be even higher at some future times, in which case those future times might be even hingier than today. Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.

Outside-View Arguments

The Value Lock-In and Time of Perils views are the major views on which HoH — or something similar — would be supported. But there are also a number of more general, and more outside-view-y, arguments that might be taken as evidence in favour of HoH:

  1. That we’re unusually early on in human history, and earlier generations in general have the ability to influence the values and motivations of later generations.[2]

  2. That we’re at an unusually high period of economic and technological growth.

  3. That the long-run trend of economic growth means we should expect extremely rapid growth into the near future, such that we should expect to hit the point of fastest-ever growth fairly soon, before slowing down.

  4. That we’re unusually well-connected and able to cooperate in virtue of being on one planet.

  5. That we’re unusually likely to become extinct in virtue of being on one planet.

My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]

Arguments against HoH

#1: The outside-view argument against HoH

Informally, the core argument against HoH is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour. We don’t have such extraordinary evidence in its favour. So we shouldn’t believe in HoH.

I’ll take each of the key claims in this argument in turn:

  1. It’s a priori extremely unlikely that we’re at the hinge of history

  2. The belief that we’re at the hinge of history is fishy

  3. Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful

Claim 1

That HoH is a priori unlikely should be pretty obvious. It’s hard to know exactly what ur-prior to use for this claim, though. One natural thought is that we could use, say, 1 trillion years’ time as an early estimate for the ‘end of time’ (due to the last naturally occurring star formation), and a 0.01% chance of civilisation surviving that long. Then, as a lower bound, there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

(This is a very rough argument. I really don’t know what the right ur-prior is to set here, and I’d be keen to see further discussion, as it potentially changes one’s posterior on HoH by an awful lot.)
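As a sanity check, the arithmetic above can be laid out explicitly. This is just a restatement of the text's illustrative numbers, not an independent estimate:

```python
# Back-of-the-envelope prior from the text. All inputs are the text's
# illustrative assumptions.
years_until_end = 1e12                        # 'end of time': last natural star formation
centuries_until_end = years_until_end / 100   # 1e10 centuries
p_survive_that_long = 1e-4                    # 0.01% chance of surviving that long

# Lower bound on expected future centuries: 1e10 * 1e-4 = 1e6
expected_centuries = centuries_until_end * p_survive_that_long

# Uniform prior that any given century is the most influential: 1 in 1,000,000
prior_most_influential = 1 / expected_centuries

# Restricting a uniform prior to the first 10% of civilisation's history
# still gives only 1 in 100,000:
prior_first_tenth = 1 / (0.1 * expected_centuries)
```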

[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one’s priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that’s the claim we can focus our discussion on.

It’s worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas (‘Self-location and objective chance’ (ms))): “A rational agent’s priors locate him uniformly at random within each possible world.” I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don’t need to debate SIA in this discussion, because we can simply assume some prior probability distribution over sizes of the total population — the question of whether we’re at the most influential time does not require us to get into debates over anthropics.]

Claim 2

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form. But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards.

Being fishy is different than just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
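To make the odds comparison concrete, here is a minimal sketch of the calculation. The 1/52! figure is from the text; the prior and the likelihood of an unshuffled deck being in perfect order are made-up illustrative numbers:

```python
from math import factorial

# Hypotheses: the deck was shuffled vs. it was not.
p_shuffled = 0.99        # assumed prior: decks are almost always shuffled
p_not_shuffled = 0.01

# Likelihood of dealing the cards in perfect order under each hypothesis:
p_order_given_shuffled = 1 / factorial(52)   # roughly 1 in 8 * 10^67
p_order_given_not_shuffled = 0.5             # assumed: unshuffled decks are often ordered

# Posterior odds in favour of 'not shuffled':
posterior_odds = (p_not_shuffled * p_order_given_not_shuffled) / (
    p_shuffled * p_order_given_shuffled
)
# Despite a prior strongly favouring 'shuffled', the posterior odds
# overwhelmingly favour 'not shuffled'.
```

This is the structure of the fishiness argument: a merely unlikely observation leaves the background assumption intact, while a fishy one hands almost all the posterior to the alternative hypothesis.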

Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the reasoning process that led me to this conclusion is flawed in some way, because P(I’m reasoning poorly)P(seems like I’m living at the hinge of history | I’m reasoning poorly) >> P(I’m reasoning correctly)P(seems like I’m living at the hinge of history | I’m reasoning correctly). In contrast, I wouldn’t have the same reason to doubt my underlying assumptions if I concluded that I was living in the 1047th most influential century.

The strength of this argument depends in part on how confident we are in our own reasoning abilities in this domain. But it seems to me there’s a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:

  • Salience. It’s much easier to see the importance of what’s happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings.

  • Confirmation. For those of us, like myself, who would very much like the world to be taking much stronger action on extinction risk mitigation than it is today (even if the probability of extinction is low), it would be a good outcome if people (who do not have longtermist values) think that the risk of extinction is high, even if it’s low. So we might be biased (subconsciously) to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion (“we should take extinction risk a lot more seriously”), they tend to marshal arguments in its favour more than they should, rather than carefully assessing arguments on either side. Though we try our best to avoid such biases, it’s very hard to overcome them.

  • Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]

I don’t like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that’s very difficult to reason correctly about should make us quite cautious when updating from our prior.

If you accept you should have a very low prior in HoH, you need to be very confident that you’re good at reasoning about the long-run significance of events (such as the magnitude of risk from some new technology) in order to have a significant posterior credence in HoH, rather than concluding we’re mistaken in some way. But we have no reason to believe that we’re very reliable in our reasoning in these matters. We don’t have a good track record of making predictions about the importance of historical events, and some track record of being badly wrong. So, if a chain of reasoning leads us to the conclusion that we’re living in the most important century ever, we should think it more likely that our reasoning has gone wrong than that the conclusion really is true. Given the low base rate, and given our faulty tools for assessing the claim, the evidence in favour of HoH is almost certainly a false positive.

Claim 3

I’ve described some of the arguments for thinking that we’re at an unusually influential time in the previous section.

I won’t discuss the object-level of these arguments here, but it seems hard to see how these arguments could be strong enough to move us from the very low prior all the way to significant credence in HoH. To illustrate: a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, corresponds to a Bayes factor of around 3; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100,000 to a posterior of 1 in 10, one would need a Bayes factor of 10,000 — extraordinarily strong evidence.

But, so this argument goes, the evidence we have for either the Value Lock-in view or the Time of Perils view consists of informal arguments. They aren’t based on data (because they generally concern future events) nor, in general, are they based on trend extrapolation, nor are they based on very well-understood underlying mechanisms, such as physical mechanisms. And the range of deep critical engagement with those informal arguments, especially from ‘external’ critics, has, so far, been limited. So it’s hard to see why we should give them much more evidential weight than, say, a well-done RCT with a p-value of 0.05 — let alone assign them an evidential weight 3000 times that amount.

An alternative path to the same conclusion is as follows. Suppose that, if we’re at the hinge of history, we’d certainly have seeming evidence that we’re at the hinge of history; so say that P(E | HoH) ≈ 1. But if we weren’t at the hinge of history, what would be the chances of us seeing seeming evidence that we are at the hinge of history? It’s not astronomically low; perhaps P(E | ¬HoH) ≈ 0.01. (This would seem reasonable to believe if we found just one century in the past 10,000 years where people would have had strong-seeming evidence in favour of the idea that they were at the hinge of history. This seems conservative. Consider: the periods of the birth of Christ and early Christianity; the times of Moses, Mohammed, Buddha and other religious leaders; the Reformation; the colonial period; the start of the industrial revolution; the two world wars and the defeat of fascism; and countless other events that would have seemed momentous at the time but have since been forgotten in the sands of history. These might have all seemed like good evidence to the observers at the time that they were living at the hinge of history, had they thought about it.) But, if so, then our Bayes factor is 100 (or less): enough to push us from 1 in 100,000 to 1 in 1000 in HoH, but not all the way to significant credence.
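The two updates just described can be checked with a few lines of arithmetic; every number below is one given in the text:

```python
def bayes_update(prior, bayes_factor):
    """Posterior probability after updating a prior by a given Bayes factor."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

prior_hoh = 1 / 100_000

# A Bayes factor of roughly 10,000 is what it takes to move this prior
# to a posterior of about 1 in 10:
posterior_strong = bayes_update(prior_hoh, 10_000)

# The alternative path: P(E | HoH) ≈ 1 and P(E | ¬HoH) ≈ 0.01 give a
# Bayes factor of 1 / 0.01 = 100, which only reaches about 1 in 1000:
posterior_weak = bayes_update(prior_hoh, 1 / 0.01)
```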

#2: The Inductive Argument against HoH

In addition to the previous argument, which relies on priors and claims we shouldn’t move drastically far from those priors, there’s a positive argument against HoH, which gives us evidence against HoH, whatever our priors. This argument is based on induction from past times.

If, when looking into the past, we saw hinginess steadily decrease, that would be a good reason for thinking that now is hingier than all times to come, and so we should take action now rather than pass resources on to future longtermists. If we had seen hinginess steadily increase, then we have some reason for thinking that the hingiest times are yet to come; if we had a good understanding of the mechanism of why hinginess is increasing, and knew that mechanism was set to continue into the future, that would strengthen that argument further.

I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 had passed resources on to us, today, rather than attempting direct longtermist influence. (I certainly would prefer this.) One reason for thinking this would be if one thinks that now is simply a more pivotal point in time, because of our current level of technological progress. However, the stronger reason, it seems to me, is that our knowledge has increased so considerably since then. (Recall that on my definition a particularly hingey time depends both on how pivotal the period in history is and the extent to which a longtermist at the time would know enough to do something about it.) Someone in 1600 couldn’t have had knowledge of AI, or population ethics, or the length of time that humanity might continue for, or of expected utility theory, or of good forecasting practices; they would have had no clue about how to positively influence the long-run future, and might well have done harm. Much the same is true of someone in 1900 (though they would have had access to some of those concepts). It’s even true of someone in 1990, before people became aware of risks around AI. So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing.

But we know that we aren’t anywhere close to having fully worked out how to think about the long-run effects of our actions, evaluate them, and prioritise accordingly. We should confidently expect that in the future we will come across new crucial considerations — as serious as the idea of population ethics, or AI risk — or major revisions of our views. So, just as we think that people in the past should have passed resources on to us rather than doing direct work, so, this argument goes, we should pass resources into the future rather than doing direct longtermist work. We should think, in virtue of future people’s far better epistemic state, that some future time is more influential.

There are at least three ways in which our knowledge is changing or improving over time, and it’s worth distinguishing them:

  1. Our basic scientific and technological understanding, including our ability to turn resources into things we want.

  2. Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.

  3. Our values.

It’s clear that we are improving on (1) and (2). All other things being equal, this gives us reason to give resources to future people to use rather than to use those resources now. The importance of this, it seems to me, is very great. Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them. Even now, the science of good forecasting practices is still in its infancy, and the study of how to make reliable long-term forecasts is almost nonexistent.

It’s more contentious whether we’re improving on (3) — for this argument one’s meta-ethics becomes crucial. Perhaps the Victorians would have had a very poor understanding of how to improve the long-run future by the lights of their own values, but they would have still preferred to do that than to pass resources on to future people, who would have done a better job of shaping the long-run future but in line with a different set of values. So if you endorse a simple subjectivist view, you might think that even in such an epistemically impoverished state you should still prefer to act now rather than pass the baton on to future generations with aims very different from yours (and even then you might still want to save money in a Victorian-values foundation to grant out at a later date). This view also makes the a priori unlikelihood of living at the hinge of history much less: from the perspective of your idiosyncratic values, now is the only time that they are instantiated in physical form, so of course this time is important!

In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations. For just some ways in which we should expect moral progress: population ethics was first introduced as a field of enquiry in the 1980s (with Parfit’s Reasons and Persons); infinite ethics was only first seriously discussed in moral philosophy in the early 1990s (e.g. Vallentyne’s Utilitarianism and Infinite Utility), and it’s clear we don’t know what the right answers are; moral uncertainty was only first discussed in modern times in 2000 (with Lockhart’s Moral Uncertainty and its Consequences) and had very little attention until around the 2010s (with Andrew Sepielli’s PhD and then my DPhil), and again we’ve only just scraped the surface of our understanding of it.

So, just as we think that the intellectual impoverishment of the Victorians means they would have done a terrible job of trying to positively influence the long-run future, we should think that, compared to future people, we are thrashing around in ignorance. In which case we don’t have the level of understanding required for ours to be the most influential time.

#3: The simulation update argument against HoH

The final argument[5] is:

  1. If it seems to you that you’re at the most influential time ever, you’re differentially much more likely to be in a simulation. (That is: P(simulation | seems like HoH) >> P(not-simulation | seems like HoH).)

  2. The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t. (In general, I’d aver that we have very little understanding of the best things to do if we’re in a simulation, though there’s a lot more to be said here.)

  3. So we should not make a major update in the most action-relevant proposition, which is that we’re both at the hinge of history and not in a simulation.

The primary reason for believing (1) is that the most influential time in history would seem likely to be a very common subject of study by our descendants, and much more common than other periods in time. (Just as crucial periods in time, like the industrial revolution, get vastly more study by academics today than less pivotal periods, like 4th century Indonesia.) The primary reasons for believing (2) are that if we’re in a simulation it’s much more likely that the future is short, that extending our future doesn’t change the total amount of lived experiences (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.

This argument is really just a special case of argument #1: if it seems like you're at the most influential point in time ever, probably something funny is going on. The simulation idea is just one way of spelling out 'something funny going on'. On this basis I'm personally reluctant to make major updates in the direction of living in a simulation, rather than towards more banal hypotheses, such as some inside-view arguments simply not being very strong; but others might disagree.

Might today be merely an enormously influential time?

In response to the arguments I've given above, you might say: "Ok, perhaps we don't have good reasons for thinking that we're at the most influential time in history. But the arguments support the idea that we're at an enormously influential time. And very little changes whether you think that we're at the most influential time ever, or merely at an enormously influential time, with some future time more influential still."

However, I don't think this response is a good one, for three reasons.

First, the implication that we're among the very most influential times is susceptible to arguments very similar to the ones I gave against HoH. The idea that we're in one of the ten most influential times is ten times more likely a priori than the claim that we're in the most influential time, and it's perhaps more than ten times less fishy. But it's still extremely unlikely a priori, and still very fishy. So we should be very doubtful of the claim, in the absence of extraordinarily powerful arguments in its favour.
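The arithmetic here is simple enough to spell out. As a sketch, with a made-up assumption about how long civilisation lasts (one million years, i.e. 10,000 centuries):

```python
# A priori odds of living at an unusually influential century, assuming
# (purely for illustration) that civilisation lasts one million years.
total_centuries = 1_000_000 // 100   # 10,000 centuries in total

p_most_influential = 1 / total_centuries   # chance this is the single most influential
p_top_ten = 10 / total_centuries           # top-10 claim: ten times more likely

print(f"P(single most influential century) = {p_most_influential:.2%}")
print(f"P(among the ten most influential)  = {p_top_ten:.2%}")
```

On this assumption the top-10 claim moves from a 0.01% prior to a 0.10% prior: ten times more likely, but still far below the credences under discussion, which is the point of the paragraph above.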

Second, some views held in the effective altruism community seem to imply not just that we're at some very influential time, but that we're at the most influential time ever. On the fast-takeoff story associated with Bostrom and Yudkowsky, once we develop AGI we rapidly end up with a universe determined in line with a singleton superintelligence's values, or in line with the values of those who manage to control it. Either way, it's the decisive moment for the entire rest of civilisation. But if you find the claim that we're at the most influential time ever hard to swallow, then you have, by modus tollens, to reject that story of the development of superintelligence.

Third, even if we're at some enormously influential time right now, if there's some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) so that our resources can be used at that future, higher-impact time. Perhaps there's some reason why that plan doesn't make sense; but, currently, almost no one is even taking that possibility seriously.

Possible other hinge times

If now isn't the most influential time ever, when is? I won't claim to be able to answer that question, but in order to help make alternative possibilities more vivid I've put together a list of times in the past and future that seem particularly hingey to me.

Of course, it's much more likely, a priori, that if HoH is false, then the most influential time is in the future. And we should also care more about the hingeyness of future times than of past times, because we can try to save resources to affect future times, but we know we can't affect past times.[6] But past hingeyness might still be relevant for assessing hingeyness today: if hingeyness has been continually decreasing over time, that gives us some reason for thinking that the present time is more influential than any future time; if it's been up and down, or increasing over time, that might give us evidence for thinking that some future time will be more influential.

Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was, or to have been able to do anything about it[7]):

  • The hunter-gatherer era, which offered individuals the ability to have a much larger impact on technological progress than today.

  • The Axial Age, which offered opportunities to influence the formation of what are today the major world religions.

  • The colonial period, which offered opportunities to influence the formation of nations, their constitutions and values.

  • The formation of the USA, especially at the time just before, during and after the Philadelphia Convention, when the Constitution was created.

  • World War II, and the resultant comparative influence of liberalism vs fascism over the world.

  • The post-WWII formation of the first somewhat effective intergovernmental institutions, like the UN.

  • The Cold War, and the resultant comparative influence of liberalism vs communism over the world.

In contrast, if the hingiest times are in the future, it's likely that this is for reasons that we haven't thought of. But there are future scenarios that we can imagine now that would seem very influential:

  • If there is a future and final World War, resulting in a unified global culture, the outcome of that war could partly determine what values influence the long-run future.

  • If one religion ultimately outcompetes both atheism and other religions and becomes a world religion, then the values embodied in that religion could partly determine what values influence the long-run future.[8]

  • If a world government is formed, whether during peacetime or as a result of a future World War, then the constitution it embodies could constrain development over the long-run future, whether by persisting indefinitely, by having knock-on effects on future institutions, or by influencing how some other lock-in event takes place.

  • The time at which settlement of other solar systems begins could be highly influential for longtermists. For example, the ownership of other solar systems could be determined by an auction among nations and/or companies and individuals (much as the USA purchased Alaska and a significant portion of the midwest in the 19th century[9]); or by an essentially lawless race between nations (as happened with European colonisation); or through war (as has happened throughout history). If the returns from interstellar settlement pay off only over very long timescales (which seems likely), and if most of the decision-makers of the time still intrinsically discount future benefits, then longtermists at the time would be able to cheaply buy huge influence over the future.

  • The time when the settlement of other galaxies begins, which might obey similar dynamics to the settlement of other solar systems.
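The claim in the solar-system bullet that longtermists could "cheaply buy huge influence" rests on discounting arithmetic. A minimal sketch, with invented numbers (the payoff size, the 1,000-year delay, and the 2% discount rate are all assumptions for illustration): a buyer who applies even a modest pure time discount values a distant payoff at almost nothing, while a longtermist with no pure time discount values it at face value.

```python
# Present value of a future benefit to a buyer who discounts at annual rate r,
# versus a longtermist who applies no pure time discount. Numbers are illustrative.

def present_value(benefit: float, years: int, annual_discount_rate: float) -> float:
    """Value today of a benefit received `years` from now."""
    return benefit / (1 + annual_discount_rate) ** years

benefit = 1_000_000.0   # hypothetical payoff (arbitrary units) from a distant settlement
years = 1_000           # assumed delay before the payoff arrives

discounter_value = present_value(benefit, years, 0.02)   # values it at a tiny fraction
longtermist_value = present_value(benefit, years, 0.0)   # values it at face value

print(f"Discounter's valuation:  {discounter_value:.4f}")
print(f"Longtermist's valuation: {longtermist_value:.0f}")
```

Under these assumptions the discounter would sell the claim for well under one unit, so a patient longtermist could buy an enormous future payoff very cheaply, which is the mechanism the bullet describes.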


I said at the start that it's non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the resources longtermist altruists have at a future, more hingey time. And, if one finds my second argument compelling, then research, especially into social science and moral and political philosophy, might also seem unusually promising.

These are activities that seem like they would have been good strategies across many times in the past. If we think that today is not exceptionally different from times in the past, this gives us reason to think that they are good strategies now, too.

[1] The question of what 'resources' are in this context is tricky. As a working definition, I'll use 1 megajoule of stored but useable energy, where I'll allow the form of stored energy to vary over time: so it could be in the form of grain in the past, oil today, and antimatter in the future.

[2] H/T to Carl Shulman for this wonderful quote from C.S. Lewis, The Abolition of Man: "In order to understand fully what Man's power over Nature, and therefore the power of some men over other men, really means, we must picture the race extended in time from the date of its emergence to that of its extinction. Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them. And if, as is almost certain, the age which had thus attained maximum power over posterity were also the age most emancipated from tradition, it would be engaged in reducing the power of its predecessors almost as drastically as that of its successors. And we must also remember that, quite apart from this, the later a generation comes — the nearer it lives to that date at which the species becomes extinct — the less power it will have in the forward direction, because its subjects will be so few. There is therefore no question of a power vested in the race as a whole steadily growing as long as the race survives. The last men, far from being the heirs of power, will be of all men most subject to the dead hand of the great planners and conditioners and will themselves exercise least power upon the future.

The real picture is that of one dominant age — let us suppose the hundredth century A.D. — which resists all previous ages most successfully and dominates all subsequent ages most irresistibly, and thus is the real master of the human species. But then within this master generation (itself an infinitesimal minority of the species) the power will be exercised by a minority smaller still. Man's conquest of Nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men. There neither is nor can be any simple increase of power on Man's side. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car."

[3] Quantitatively: these considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.

[4] These are just anecdotes, and I'd love to see someone undertake a thorough investigation of how often people tend to overreact vs underreact to technological developments, especially in terms of risk assessment and safety. As well as helping us understand how likely we are to be biased, this is relevant to how much we should expect other actors in the coming decades to invest in safety with respect to AI and synthetic biology.

[5] I note that this argument has been independently generated quite a number of times by different people.

[6] Though if one endorses non-causal decision theory, those times might still be decision-relevant.

[7] An exception might have been some of the US founding fathers. For example, John Adams, the second US President, commented that: "The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless by accident, to the right path." (H/T Christian Tarsney for the quote.)

[8] If you’re an athe­ist, it’s easy to think it’s in­evitable that athe­ists will win out in the end. But be­cause of differ­ences in fer­til­ity rate, the global pro­por­tion of fun­da­men­tal­ists is pre­dicted to rise and the pro­por­tion of athe­ists is pre­dicted to de­cline. What’s more, re­li­gios­ity is mod­er­ately her­i­ta­ble, so these differ­ences could com­pound into the fu­ture. For dis­cus­sion, see Shall the re­li­gious in­herit the earth? by Eric Kauf­man.

[9] Some numbers on this: the Louisiana Purchase cost $15 million at the time, or $250 million in today's money, for what is now 23.3% of US territory. https://component/content/article/155/25993.html Alaska cost $120 million in today's money; its GDP today is $54 billion per year. https://louisseries/AKNGSP