Crucial questions for longtermists

This post was written for Convergence Analysis. It introduces a collection of “crucial questions for longtermists”: important questions about the best strategies for improving the long-term future. This collection is intended to serve as an aide to thought and communication, a kind of research agenda, and a kind of structured reading list.

Introduction

The last decade saw substantial growth in the amount of attention, talent, and funding flowing towards existential risk reduction and longtermism. There are many different strategies, risks, organisations, etc. to which these resources could flow. How can we direct these resources in the best way? Why were these resources directed as they were? Are people able to understand and critique the beliefs underlying various views—including their own—regarding how best to put longtermism into practice?

Relatedly, the last decade also saw substantial growth in the amount of research and thought on issues important to longtermist strategies. But this is scattered across a wide array of articles, blogs, books, podcasts, videos, etc. Additionally, these pieces of research and thought often use different terms for similar things, or don’t clearly highlight how particular beliefs, arguments, and questions fit into various bigger pictures. This can make it harder to get up to speed with, form independent views on, and collaboratively sculpt the vast landscape of longtermist research and strategy.

To help address these issues, this post collects, organises, highlights connections between, and links to sources relevant to a large set of the “crucial questions” for longtermists.[1] These are questions whose answers might be “crucial considerations”—that is, considerations which are “likely to cause a major shift of our view of interventions or areas”.

We collect these questions into topics, and then progressively break “top-level questions” down into the lower-level “sub-questions” that feed into them. For example, the topic “Optimal timing for work/donations” includes the top-level question “How will ‘leverage over the future’ change over time?”, which is broken down into (among other things) “How will the neglectedness of longtermist causes change over time?” We also link to Google docs containing many relevant links and notes.

What kind of questions are we including?

The post A case for strategy research visualised the “research spine of effective altruism” as follows:

This post can be seen as collecting questions relevant to the “strategy” level.

One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.[2]

One could also imagine a version of this post that “zooms in” on one specific topic we provide only a high-level view of, and that discusses that topic in more detail than we do. This could be considered work on “tactics”, or on “strategy” within some narrower domain. An example of something like that is the post Clarifying some key hypotheses in AI alignment. That sort of work is highly valuable, and we’ll provide many links to such work. But the scope of this post itself will be restricted to the relatively high-level questions, to keep the post manageable and avoid readers (or us) losing sight of the forest for the trees.[3]

Finally, we’re mostly focused on:

  • Questions about which different longtermists have different beliefs, with those beliefs playing an explicit role in their strategic views and choices

  • Questions about which some longtermists think learning more or changing their beliefs would change their strategic views and choices

  • Questions which it appears some longtermists haven’t noticed at all, the noticing of which might influence those longtermists’ strategic views and choices

These can be seen as questions that reveal a “double crux” that explains the different strategies of different longtermists. We thus exclude questions about which practically, or by definition, all longtermists agree.

A high-level overview of the crucial questions for longtermists

Here we provide our current collection and structuring of crucial questions for longtermists. The linked Google docs contain some further information and a wide range of links to relevant sources, and I intend to continue adding new links in those docs for the foreseeable future.

“Big picture” questions (i.e., not about specific technologies, risks, or risk factors)

See here for notes and links related to these topics.

  • Value of, and best approaches to, existential risk reduction

    • How “good” might the future be, if no existential catastrophe occurs?[4]

      • What is the possible scale of the human-influenced future?

      • What is the possible duration of the human-influenced future?

      • What is the possible quality of the human-influenced future?

        • How does the “difficulty” or “cost” of creating pleasure vs. pain compare?

      • Can and will we expand into space? In what ways, and to what extent? What are the implications?

        • Will we populate colonies with (some) nonhuman animals, e.g. through terraforming?

      • Can and will we create sentient digital beings? To what extent? What are the implications?

        • Would their experiences matter morally?

        • Will some be created accidentally?

    • How “bad” would the future be, if an existential catastrophe occurs? How does this differ between different existential catastrophes?

      • How likely is future evolution of moral agents or patients on Earth, conditional on (various different types of) existential catastrophe? How valuable would that future be?

      • How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?

    • How high is total existential risk? How will the risk change over time?[5]

    • Where should we be on the “narrow vs. broad” spectrum of approaches to existential risk reduction?

    • To what extent will efforts focused on global catastrophic risks, or smaller risks, also help with existential risks?

  • Value of, and best approaches to, improving aspects of the future other than whether an existential catastrophe occurs[6]

    • What probability distribution over various trajectories of the future should we expect?[7]

      • How good have trajectories been in the past?

      • How close to the appropriate size should we expect influential agents’ moral circles to be “by default”?

      • How much influence should we expect altruism to have on future trajectories “by default”?[8]

      • How likely is it that self-interest alone would lead to good trajectories “by default”?

    • How does speeding up development affect the expected value of the future?[9]

      • How does speeding up development affect existential risk?

      • How does speeding up development affect astronomical waste? How much should we care?

        • With each year that passes without us taking certain actions (e.g., beginning to colonise space), what amount or fraction of resources do we lose the ability to ever use?

        • How morally important is losing the ability to ever use that amount or fraction of resources?

      • How does speeding up development affect other aspects of our ultimate trajectory?

    • What are the best actions for speeding up development? How good are they?

    • Other than speeding up development, what are the best actions for improving aspects of the future other than whether an existential catastrophe occurs? How valuable are those actions?

      • How valuable are various types of moral advocacy? What are the best actions for that?

    • How “clueless” are we?

    • Should we find claims of convergence between effectiveness for near-term goals and effectiveness for improving aspects of the future other than whether an existential catastrophe occurs “suspicious”? If so, how suspicious?

  • Value of, and best approaches to, work related to “other”, unnoticed, and/or unforeseen risks, interventions, causes, etc.

    • What are some plausibly important risks, interventions, causes, etc. that aren’t mentioned in the other “crucial questions”? How should the answer change our strategies (if at all)?

    • How likely is it that there are important unnoticed and/or unforeseen risks, interventions, causes, etc.? What should we do about that?

      • How often have we discovered new risks, interventions, causes, etc. in the past? How is that rate changing over time? What can be inferred from that?

      • How valuable is “horizon-scanning”? What are the best approaches to that?

  • Optimal timing for work/donations

    • How will “leverage over the future” change over time?

      • What should be our prior regarding how leverage over the future will change? What does the “outside view” say?

      • How will our knowledge about what we should do change over time?

      • How will the neglectedness of longtermist causes change over time?

      • What “windows of opportunity” might there be? When might those windows open and close? How important are they?

      • Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?

        • How often have people been wrong about such things in the past?

      • If leverage over the future is higher at a later time, would longtermists notice?

    • How effectively can we “punt to the future”? (A toy illustration of this trade-off is sketched just after this outline.)

      • What would be the long-term growth rate of financial investments?

      • What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?

      • What would be the long-term “growth rate” from other punting activities?

      • Would the people we’d be punting to act in ways we’d endorse?

    • Which “direct” actions might have compounding positive impacts?

    • Do marginal returns to “direct work” done within a given time period diminish? If so, how steeply?

  • Tractability of, and best approaches to, estimating, forecasting, and investigating future developments

    • How good are people at forecasting future developments in general?

      • How good are people at forecasting impacts of technologies?

        • How often do people over- vs. underestimate risks from new tech? Should we think we might be doing that?

    • What are the best methods for forecasting future developments?

  • Value of, and best approaches to, communication and movement-building

    • When should we be concerned about information hazards? How concerned? How should we respond?

    • When should we have other concerns or reasons for caution about communication? How should we respond?

    • What are the pros and cons of expanding longtermism-relevant movements in various ways?

      • What are the pros and cons of people who lack highly relevant skills being included in longtermism-relevant movements?

      • What are the pros and cons of people who don’t work full-time on relevant issues being included in longtermism-relevant movements?

  • Comparative advantage of longtermists

    • How much impact should we expect longtermists to be able to have as a result of being more competent than non-longtermists? How does this vary between different areas, career paths, etc.?

      • Generally speaking, how competent, “sane”, “wise”, etc. are existing society, elites, “experts”, etc.?

    • How much impact should we expect longtermists to be able to have as a result of having “better values/goals” than non-longtermists? How does this vary between different areas, career paths, etc.?

      • Generally speaking, how aligned with “good values/goals” (rather than with worse values, local incentives, etc.) are the actions of existing society, elites, “experts”, etc.?
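
To make the trade-off behind the “punting to the future” questions above a little more concrete, here is a minimal, purely illustrative sketch in Python. The function name, parameter names, and all default numbers are hypothetical placeholders we introduce only for illustration, not estimates endorsed in this post; a serious model would treat returns, expropriation, and changes in leverage as uncertain and time-varying.

```python
# Toy comparison of "give now" vs. "punt to the future".
# All numbers below are illustrative placeholders, not estimates.

def value_of_punting(years: float,
                     growth_rate: float = 0.05,        # assumed real annual return on investments
                     expropriation_rate: float = 0.01,  # assumed annual chance the funds are lost or seized
                     leverage_change: float = -0.02     # assumed annual change in marginal cost-effectiveness
                     ) -> float:
    """Expected value, relative to donating 1 unit now, of investing for
    `years` and then donating, under constant annual rates."""
    survival = (1 - expropriation_rate) ** years   # probability the funds are still available
    growth = (1 + growth_rate) ** years            # financial growth conditional on survival
    leverage = (1 + leverage_change) ** years      # how much a marginal donation then achieves vs. now
    return survival * growth * leverage

if __name__ == "__main__":
    for horizon in (10, 30, 100):
        print(f"{horizon:>3} years: punting is worth ~{value_of_punting(horizon):.2f}x giving now")
```

Even this crude sketch shows why the questions above matter: whether punting beats giving now depends almost entirely on how the growth rate compares with the combined “drag” from expropriation risk and declining leverage.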

Questions about emerging technologies

See here for notes and links related to these topics.

  • Value of, and best approaches to, work related to AI

    • Is it possible to build an artificial general intelligence (AGI) and/or transformative AI (TAI) system? Is humanity likely to do so?

    • What form(s) is TAI likely to take? What are the implications of that? (E.g., AGI agents vs. comprehensive AI services)

    • What will the timeline of AI developments be?

      • How “hard” are various AI developments?

      • How much “effort” will go into various AI developments?

      • How discontinuous will AI development be?

        • Will development to human-level AI be discontinuous? How much so?

        • Will development from human-level AI be discontinuous? How much so?

        • Will there be a hardware overhang? How much would that change things?

      • How important are individual insights and “lumpy” developments?

      • Will we know when TAI is coming soon? How far in advance? How confidently?

      • What are the relevant past trends? To what extent should we expect them to continue?

    • How much should longtermists prioritise AI?

      • How high is existential risk from AI?

      • How “hard” is AI safety?

        • How “hard” are non-impossible technical problems in general?

        • To what extent can we infer that the problem is hard from failures or challenges thus far?

      • Should we expect people to handle AI safety and governance issues adequately without longtermist intervention?

        • To what extent will “safety” problems be solved simply in order to increase “capability” or “economic usefulness”?

        • Would there be clearer evidence of AI risk in future, if it’s indeed quite risky? Will that lead to better behaviours regarding AI safety and governance?

      • Could AI pose suffering risks? Is it the most likely source of such risks?

      • How likely are positive or negative “non-existential trajectory changes” as a result of AI-related events? To what extent does that mean longtermists should prioritise AI?

    • What forms might an AI catastrophe take? How likely is each?

    • What are the best approaches to reducing AI risk or increasing AI benefits?

      • From a longtermist perspective, how valuable are approaches focused on relatively “near-term” or “less extreme” issues?

      • What downside risks might (various forms of) work to reduce AI risk have? How big are those downside risks?

        • How likely is it that (various forms of) work to reduce AI risk would accelerate the development of AI? Would that increase overall existential risk?

      • How important is AI governance/strategy/policy work? Which types are most important, and why?

  • Value of, and best approaches to, work related to biorisk[10] and biotechnology

    • What will the timeline of biotech developments be?

      • How “hard” are various biotech developments?

      • How much “effort” will go into various biotech developments?

    • How much should longtermists prioritise biorisk and biotech?

      • How high is existential risk from pandemics involving synthetic biology?

        • Should we be more concerned about accidental or deliberate creation of dangerous pathogens? Should we be more concerned about accidental or deliberate release? What kinds of actors should we be most concerned about?

      • How high is existential risk from naturally arising pandemics?

        • To what extent does the usual “natural risks must be low” argument apply to natural pandemics?

      • What can we (currently) learn from previous pandemics, near misses, etc.?

      • How high is the risk from antimicrobial resistance?

    • How much overlap is there between approaches focused on natural vs. anthropogenic pandemics, “regular” vs. “extreme” risks, etc.?

    • What are the best approaches to reducing biorisk?

      • What downside risks might (various forms of) work to reduce biorisk have? How big are those downside risks?

  • Value of, and best approaches to, work related to nanotechnology

    • What will the timeline of nanotech developments be?

      • How “hard” are various nanotech developments?

      • How much “effort” will go into various nanotech developments?

    • How high is the existential risk from nanotech?

    • What are the best approaches to reducing risks from nanotechnology?

      • What downside risks might (various forms of) work to reduce risks from nanotech have? How big are those downside risks?

  • Value of, and best approaches to, work related to interactions and convergences between different emerging technologies

Questions about specific existential risks (which weren’t covered above)

See here for notes and links related to these topics.

  • Value of, and best approaches to, work related to nuclear weapons

    • How high is the existential risk from nuclear weapons?

      • How likely are various types of nuclear war?

        • What countries would most likely be involved in a nuclear war?

        • How many weapons would likely be used in a nuclear war?

        • How likely is counterforce vs. countervalue targeting?

        • How likely are accidental launches?

        • How likely is escalation from accidental launch to nuclear war?

      • How likely are various severities of nuclear winter (given a certain type and severity of nuclear war)?

      • What would be the impacts of various severities of nuclear winter?

  • Value of, and best approaches to, work related to climate change

    • How high is the existential risk from climate change itself (not from geoengineering)?

      • How much climate change is likely to occur?

      • What would be the impacts of various levels of climate change?

      • How likely are various mechanisms for runaway/extreme climate change?

    • How tractable and risky are various forms of geoengineering?

      • How likely is it that risky geoengineering could be unilaterally implemented?

    • How much does climate change increase other existential risks?

  • Value of, and best approaches to, work related to totalitarianism and dystopias

    • How high is the existential risk from totalitarianism and dystopias?

      • How likely is the rise of a global totalitarian or dystopian regime?

      • How likely is it that a global totalitarian or dystopian regime that arose would last long enough to represent or cause an existential catastrophe?

    • Which political changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?

      • Would various shifts towards world government or global political cohesion increase risks from totalitarianism and dystopia? By how much? Would those shifts reduce other risks?

      • Would enhanced or centralised state power increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?

    • Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those technological changes have on the long-term future?

      • Would further development or deployment of surveillance technology increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?

      • Would further development or deployment of AI for police or military purposes increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?

      • Would further development or deployment of genetic engineering increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?

      • Would further development or deployment of other technologies for influencing/controlling people’s values increase risks from totalitarianism and dystopia? By how much?

      • Would further development or deployment of life extension technologies increase risks from totalitarianism and dystopia? By how much?

Questions about non-specific risks, existential risk factors, or existential security factors

See here for notes and links related to these topics.

  • Value of, and best approaches to, work related to global catastrophes and/or civilizational collapse

    • How much should we be concerned by possible concurrence, combinations, or cascades of catastrophes?

    • How much worse in expectation would a global catastrophe make our long-term trajectory?

      • How effectively, if at all, would a global catastrophe serve as a warning shot?

      • What can we (currently) learn from previous global catastrophes (or things that came close to being global catastrophes)?

    • How likely is collapse, given various intensities of catastrophe?

      • How resilient is society?

    • How likely would a collapse make each of the following outcomes: extinction; permanent stagnation; recurrent collapse; “scarred” recovery; full recovery?

      • What’s the minimum viable human population (from the perspective of genetic diversity)?

      • How likely is economic and technological recovery from collapse?

        • What population size is required for economic specialisation, technological development, etc.?

      • Might we have a “scarred” recovery, in which our long-term trajectory remains worse in expectation despite economic and technological recovery? How important is this possibility?

      • What can we (currently) learn from previous collapses of specific societies, or near-collapses?

    • What are the best approaches for improving mitigation of, resilience to, and recovery from global catastrophes and/or collapse (rather than preventing them)? How valuable are these approaches?

  • Value of, and best approaches to, work related to war

    • By how much does the possibility of various types of wars raise total existential risk?

      • How likely are wars of various types?

        • How likely are great power wars?

    • By how much do wars of various types increase existential risk?

      • By how much do great power wars increase existential risk?

  • Value of, and best approaches to, work related to improving institutions and/or decision-making

  • Value of, and best approaches to, work related to existential security and the Long Reflection

    • Can we achieve existential security? How?

    • Are there downsides to pursuing existential security? If so, how large are they?

    • How important is it that we have a Long Reflection process? What should such a process involve? How can we best prepare for and set up such a process?

We have also collected here some questions that seem less important, or where it’s not clear that disagreement about them really fuels differences in longtermists’ strategic views and choices. These include questions about “natural” risks (other than “natural” pandemics, which some of the above questions already addressed).

Directions for future work

We’ll soon publish a post discussing in more depth the topic of optimal timing for work and donations. We’d also be excited to see future work which:

  • Provides that sort of more detailed discussion for other topics raised in this post

  • Attempts to actually answer some of these questions, or to at least provide relevant arguments, evidence, etc.

  • Identifies additional crucial questions

  • Highlights additional relevant references

  • Further discusses how beliefs about these questions empirically do and/or logically should relate to each other and to strategic views and choices

    • This could potentially be visually “mapped”, perhaps with a similar style to that used in this post.

    • This could also include expert elicitation or other systematic collection of data on actual beliefs and decisions. That would also have the separate benefit of providing one “outside view”, which could be used as input into what one “should” believe about these questions.

  • Attempts to build formal models of what one should believe or do, or of how the future is likely to go, based on various beliefs about these questions

    • Ideally, it would be possible for readers to provide their own inputs and see what the results “should” be (a minimal sketch of this kind of model follows just after this list)
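
As a gesture at what such “provide your own inputs” models might look like, here is a minimal, purely illustrative sketch in Python. The function name, parameter names, and default numbers are hypothetical placeholders we introduce for illustration only; they are not estimates made in this post, and a real model would need far richer structure and uncertainty handling.

```python
# Illustrative-only sketch of a "plug in your own beliefs" model.
# Every default below is a placeholder; replace it with your own estimates.

def expected_value_of_xrisk_work(
    p_catastrophe: float = 0.1,              # your credence in an existential catastrophe this century
    relative_risk_reduction: float = 0.001,  # fraction of that risk you think a given intervention removes
    value_if_no_catastrophe: float = 1.0,    # value of the long-term future, in arbitrary units
) -> float:
    """Expected value of the intervention (in the same arbitrary units),
    treating 'no catastrophe' as capturing all long-term value."""
    absolute_risk_reduced = p_catastrophe * relative_risk_reduction
    return absolute_risk_reduced * value_if_no_catastrophe

if __name__ == "__main__":
    # Example: plug in your own numbers and compare candidate interventions.
    print(expected_value_of_xrisk_work(p_catastrophe=0.05,
                                       relative_risk_reduction=0.0001))
```

The point of even a toy model like this is that readers with different answers to the crucial questions above can see how those answers flow through to different strategic conclusions.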

Such work could be done as standalone outputs, or simply by commenting on this post or the linked Google docs. Please also feel free to get in touch with us if you are looking to do any of the types of work listed above.

Acknowledgements

This post and the associated documents were based in part on ideas and earlier writings by Justin Shovelain and David Kristoffersson, and benefitted from input from them. We received useful comments on a draft of this post from Arden Koehler, Denis Drescher, and Gavin Taylor, and useful comments on the section on optimal timing from Michael Dickens, Phil Trammell, and Alex Holness-Tofts. We’re also grateful to Jesse Liptrap for work on an earlier draft, and to Siebe Rozendal for comments on another earlier draft. This does not imply these people’s endorsement of all aspects of this post.


  1. Most of the questions we cover are actually also relevant to people who are focused on existential risk reduction for reasons unrelated to longtermism (e.g., due to person-affecting arguments, and/or due to assigning sufficiently high credence to near-term technological transformation scenarios). However, for brevity, we will often just refer to “longtermists” or “longtermism”. ↩︎

  2. Of course, some questions about morality are relevant even if longtermism is taken as a starting assumption. This includes questions about how important reducing suffering is relative to increasing happiness, and how much moral status various beings should get. Thus, we will touch on such questions, and link to some relevant sources. But we’ve decided to not include such questions as part of the core focus of this post. ↩︎

  3. For example, we get as fine-grained as “How likely is counterforce vs. countervalue targeting [in a nuclear war]?”, but not as fine-grained as “Which precise cities will be targeted in a nuclear war?” We acknowledge that there’ll be some arbitrariness in our decisions about how fine-grained to be. ↩︎

  4. Some of these questions are more relevant to people who haven’t (yet) accepted longtermism, rather than to longtermists. But all of these questions can be relevant to certain strategic decisions by longtermists. See the linked Google doc for further discussion. ↩︎

  5. See also our Database of existential risk estimates. ↩︎

  6. This category of strategies for influencing the future could include work aimed towards shifting some probability mass from “ok” futures (which don’t involve existential catastrophes) to especially excellent futures, or shifting some probability mass from especially awful existential catastrophes to somewhat “less awful” existential catastrophes. We plan to discuss this category of strategies more in an upcoming post. We intend this category to contrast with strategies aimed towards shifting probability mass from “some existential catastrophe occurs” to “no existential catastrophe occurs” (i.e., most existential risk reduction work). ↩︎

  7. This includes things like how likely “ok” futures are relative to especially excellent futures, and how likely especially awful existential catastrophes are relative to somewhat “less awful” ones. ↩︎

  8. This is about altruism in a general sense (i.e., concern for the wellbeing of others), not just EA specifically. ↩︎

  9. This refers to actions that speed development up in a general sense, or that “merely” change when things happen. This should be distinguished from changing which developments occur, or differentially advancing some developments relative to others. ↩︎

  10. Biorisk includes both natural pandemics and pandemics involving synthetic biology. Thus, this risk does not completely belong in the section on “emerging technologies”. We include it here anyway because anthropogenic biorisk will be our main focus in this section, given that it’s the main focus of the longtermist community and that there are strong arguments that it poses far greater existential risk than natural pandemics do (see e.g. The Precipice). ↩︎