Crucial questions for longtermists
This post was written for Convergence Analysis. It introduces a collection of "crucial questions for longtermists": important questions about the best strategies for improving the long-term future. This collection is intended to serve as an aide to thought and communication, a kind of research agenda, and a kind of structured reading list.
Introduction
The last decade saw substantial growth in the amount of attention, talent, and funding flowing towards existential risk reduction and longtermism. There are many different strategies, risks, organisations, etc. to which these resources could flow. How can we direct these resources in the best way? Why were these resources directed as they were? Are people able to understand and critique the beliefs underlying various views, including their own, regarding how best to put longtermism into practice?
Relatedly, the last decade also saw substantial growth in the amount of research and thought on issues important to longtermist strategies. But this is scattered across a wide array of articles, blogs, books, podcasts, videos, etc. Additionally, these pieces of research and thought often use different terms for similar things, or don't clearly highlight how particular beliefs, arguments, and questions fit into various bigger pictures. This can make it harder to get up to speed with, form independent views on, and collaboratively sculpt the vast landscape of longtermist research and strategy.
To help address these issues, this post collects, organises, highlights connections between, and links to sources relevant to a large set of the "crucial questions" for longtermists.[1] These are questions whose answers might be "crucial considerations", that is, considerations which are "likely to cause a major shift of our view of interventions or areas".
We collect these questions into topics, and then progressively break "top-level questions" down into the lower-level "sub-questions" that feed into them. For example, the topic "Optimal timing of work and donations" includes the top-level question "How will 'leverage over the future' change over time?", which is broken down into (among other things) "How will the neglectedness of longtermist causes change over time?" We also link to Google docs containing many relevant links and notes.
What kind of questions are we including?
The post A case for strategy research visualised the "research spine of effective altruism" as running from "values" down through "strategy" to "tactics" and implementation.
This post can be seen as collecting questions relevant to the "strategy" level.
One could imagine a version of this post that "zooms out" to discuss crucial questions on the "values" level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.[2]
One could also imagine a version of this post that "zooms in" on one specific topic we provide only a high-level view of, and that discusses that topic in more detail than we do. This could be considered work on "tactics", or on "strategy" within some narrower domain. An example of something like that is the post Clarifying some key hypotheses in AI alignment. That sort of work is highly valuable, and we'll provide many links to such work. But the scope of this post itself will be restricted to the relatively high-level questions, to keep the post manageable and avoid readers (or us) losing sight of the forest for the trees.[3]
Finally, we're mostly focused on:
- Questions about which different longtermists have different beliefs, with those beliefs playing an explicit role in their strategic views and choices
- Questions about which some longtermists think learning more or changing their beliefs would change their strategic views and choices
- Questions which it appears some longtermists haven't noticed at all, the noticing of which might influence those longtermists' strategic views and choices

These can be seen as questions that reveal a "double crux" that explains the different strategies of different longtermists. We thus exclude questions on which all longtermists agree, whether in practice or by definition.
A high-level overview of the crucial questions for longtermists
Here we provide our current collection and structuring of crucial questions for longtermists. The linked Google docs contain some further information and a wide range of links to relevant sources, and I intend to continue adding new links in those docs for the foreseeable future.
"Big picture" questions (i.e., not about specific technologies, risks, or risk factors)
See here for notes and links related to these topics.
- Value of, and best approaches to, existential risk reduction
  - How "good" might the future be, if no existential catastrophe occurs?[4]
    - What is the possible scale of the human-influenced future?
    - What is the possible duration of the human-influenced future?
    - What is the possible quality of the human-influenced future?
      - How does the "difficulty" or "cost" of creating pleasure vs. pain compare?
    - Can and will we expand into space? In what ways, and to what extent? What are the implications?
      - Will we populate colonies with (some) nonhuman animals, e.g. through terraforming?
    - Can and will we create sentient digital beings? To what extent? What are the implications?
      - Would their experiences matter morally?
      - Will some be created accidentally?
  - How "bad" would the future be, if an existential catastrophe occurs? How does this differ between different existential catastrophes?
    - How likely is future evolution of moral agents or patients on Earth, conditional on (various different types of) existential catastrophe? How valuable would that future be?
    - How likely is it that our observable universe contains extraterrestrial intelligence (ETI)? How valuable would a future influenced by them rather than us be?
  - How high is total existential risk? How will the risk change over time?[5]
  - Where should we be on the "narrow vs. broad" spectrum of approaches to existential risk reduction?
    - To what extent will efforts focused on global catastrophic risks, or smaller risks, also help with existential risks?
- Value of, and best approaches to, improving aspects of the future other than whether an existential catastrophe occurs[6]
  - What probability distribution over various trajectories of the future should we expect?[7]
    - How good have trajectories been in the past?
    - How close to the appropriate size should we expect influential agents' moral circles to be "by default"?
    - How much influence should we expect altruism to have on future trajectories "by default"?[8]
    - How likely is it that self-interest alone would lead to good trajectories "by default"?
  - How does speeding up development affect the expected value of the future?[9]
    - How does speeding up development affect existential risk?
    - How does speeding up development affect astronomical waste? How much should we care?
      - With each year that passes without us taking certain actions (e.g., beginning to colonise space), what amount or fraction of resources do we lose the ability to ever use?
      - How morally important is losing the ability to ever use that amount or fraction of resources?
    - How does speeding up development affect other aspects of our ultimate trajectory?
  - What are the best actions for speeding up development? How good are they?
  - Other than speeding up development, what are the best actions for improving aspects of the future other than whether an existential catastrophe occurs? How valuable are those actions?
    - How valuable are various types of moral advocacy? What are the best actions for that?
  - How "clueless" are we?
  - Should we find claims of convergence between effectiveness for near-term goals and effectiveness for improving aspects of the future other than whether an existential catastrophe occurs "suspicious"? If so, how suspicious?
- Value of, and best approaches to, work related to "other", unnoticed, and/or unforeseen risks, interventions, causes, etc.
  - What are some plausibly important risks, interventions, causes, etc. that aren't mentioned in the other "crucial questions"? How should the answer change our strategies (if at all)?
  - How likely is it that there are important unnoticed and/or unforeseen risks, interventions, causes, etc.? What should we do about that?
    - How often have we discovered new risks, interventions, causes, etc. in the past? How is that rate changing over time? What can be inferred from that?
    - How valuable is "horizon-scanning"? What are the best approaches to that?
- Optimal timing for work/donations
  - How will "leverage over the future" change over time?
    - What should be our prior regarding how leverage over the future will change? What does the "outside view" say?
    - How will our knowledge about what we should do change over time?
    - How will the neglectedness of longtermist causes change over time?
    - What "windows of opportunity" might there be? When might those windows open and close? How important are they?
    - Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?
      - How often have people been wrong about such things in the past?
    - If leverage over the future is higher at a later time, would longtermists notice?
  - How effectively can we "punt to the future"?
    - What would be the long-term growth rate of financial investments?
    - What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?
    - What would be the long-term "growth rate" from other punting activities?
    - Would the people we'd be punting to act in ways we'd endorse?
  - Which "direct" actions might have compounding positive impacts?
  - Do marginal returns to "direct work" done within a given time period diminish? If so, how steeply?
- Tractability of, and best approaches to, estimating, forecasting, and investigating future developments
  - How good are people at forecasting future developments in general?
    - How good are people at forecasting impacts of technologies?
      - How often do people over- vs. underestimate risks from new tech? Should we think we might be doing that?
  - What are the best methods for forecasting future developments?
- Value of, and best approaches to, communication and movement-building
  - When should we be concerned about information hazards? How concerned? How should we respond?
  - When should we have other concerns or reasons for caution about communication? How should we respond?
  - What are the pros and cons of expanding longtermism-relevant movements in various ways?
    - What are the pros and cons of people who lack highly relevant skills being included in longtermism-relevant movements?
    - What are the pros and cons of people who don't work full-time on relevant issues being included in longtermism-relevant movements?
- Comparative advantage of longtermists
  - How much impact should we expect longtermists to be able to have as a result of being more competent than non-longtermists? How does this vary between different areas, career paths, etc.?
    - Generally speaking, how competent, "sane", "wise", etc. are existing society, elites, "experts", etc.?
  - How much impact should we expect longtermists to be able to have as a result of having "better values/goals" than non-longtermists? How does this vary between different areas, career paths, etc.?
    - Generally speaking, how aligned with "good values/goals" (rather than with worse values, local incentives, etc.) are the actions of existing society, elites, "experts", etc.?
Questions about emerging technologies
See here for notes and links related to these topics.
- Value of, and best approaches to, work related to AI
  - Is it possible to build an artificial general intelligence (AGI) and/or transformative AI (TAI) system? Is humanity likely to do so?
  - What form(s) is TAI likely to take? What are the implications of that? (E.g., AGI agents vs. comprehensive AI services)
  - What will the timeline of AI developments be?
    - How "hard" are various AI developments?
    - How much "effort" will go into various AI developments?
    - How discontinuous will AI development be?
      - Will development to human-level AI be discontinuous? How much so?
      - Will development from human-level AI be discontinuous? How much so?
      - Will there be a hardware overhang? How much would that change things?
    - How important are individual insights and "lumpy" developments?
    - Will we know when TAI is coming soon? How far in advance? How confidently?
    - What are the relevant past trends? To what extent should we expect them to continue?
  - How much should longtermists prioritise AI?
    - How high is existential risk from AI?
      - How "hard" is AI safety?
        - How "hard" are non-impossible technical problems in general?
        - To what extent can we infer that the problem is hard from failure or challenges thus far?
      - Should we expect people to handle AI safety and governance issues adequately without longtermist intervention?
        - To what extent will "safety" problems be solved simply in order to increase "capability" or "economic usefulness"?
        - Would there be clearer evidence of AI risk in future, if it's indeed quite risky? Will that lead to better behaviours regarding AI safety and governance?
    - Could AI pose suffering risks? Is it the most likely source of such risks?
    - How likely are positive or negative "non-existential trajectory changes" as a result of AI-related events? To what extent does that mean longtermists should prioritise AI?
  - What forms might an AI catastrophe take? How likely is each?
  - What are the best approaches to reducing AI risk or increasing AI benefits?
    - From a longtermist perspective, how valuable are approaches focused on relatively "near-term" or "less extreme" AI issues?
    - What downside risks might (various forms of) work to reduce AI risk have? How big are those downside risks?
      - How likely is it that (various forms of) work to reduce AI risk would accelerate the development of AI? Would that increase overall existential risk?
    - How important is AI governance/strategy/policy work? Which types are most important, and why?
- Value of, and best approaches to, work related to biorisk[10] and biotechnology
  - What will the timeline of biotech developments be?
    - How "hard" are various biotech developments?
    - How much "effort" will go into various biotech developments?
  - How much should longtermists prioritise biorisk and biotech?
    - How high is existential risk from pandemics involving synthetic biology?
      - Should we be more concerned about accidental or deliberate creation of dangerous pathogens? Should we be more concerned about accidental or deliberate release? What kinds of actors should we be most concerned about?
    - How high is existential risk from naturally arising pandemics?
      - To what extent does the usual "natural risks must be low" argument apply to natural pandemics?
    - What can we (currently) learn from previous pandemics, near misses, etc.?
    - How high is the risk from antimicrobial resistance?
  - How much overlap is there between approaches focused on natural vs. anthropogenic pandemics, "regular" vs. "extreme" risks, etc.?
  - What are the best approaches to reducing biorisk?
    - What downside risks might (various forms of) work to reduce biorisk have? How big are those downside risks?
- Value of, and best approaches to, work related to nanotechnology
  - What will the timeline of nanotech developments be?
    - How "hard" are various nanotech developments?
    - How much "effort" will go into various nanotech developments?
  - How high is the existential risk from nanotech?
  - What are the best approaches to reducing risks from nanotechnology?
    - What downside risks might (various forms of) work to reduce risks from nanotech have? How big are those downside risks?
- Value of, and best approaches to, work related to interactions and convergences between different emerging technologies
Questions about specific existential risks (which weren't covered above)
See here for notes and links related to these topics.
- Value of, and best approaches to, work related to nuclear weapons
  - How high is the existential risk from nuclear weapons?
    - How likely are various types of nuclear war?
      - What countries would most likely be involved in a nuclear war?
      - How many weapons would likely be used in a nuclear war?
      - How likely is counterforce vs. countervalue targeting?
      - How likely are accidental launches?
      - How likely is escalation from accidental launch to nuclear war?
    - How likely are various severities of nuclear winter (given a certain type and severity of nuclear war)?
    - What would be the impacts of various severities of nuclear winter?
- Value of, and best approaches to, work related to climate change
  - How high is the existential risk from climate change itself (not from geoengineering)?
    - How much climate change is likely to occur?
    - What would be the impacts of various levels of climate change?
    - How likely are various mechanisms for runaway/extreme climate change?
  - How tractable and risky are various forms of geoengineering?
    - How likely is it that risky geoengineering could be unilaterally implemented?
  - How much does climate change increase other existential risks?
- Value of, and best approaches to, work related to totalitarianism and dystopias
  - How high is the existential risk from totalitarianism and dystopias?
    - How likely is the rise of a global totalitarian or dystopian regime?
    - How likely is it that a global totalitarian or dystopian regime that arose would last long enough to represent or cause an existential catastrophe?
  - Which political changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?
    - Would various shifts towards world government or global political cohesion increase risks from totalitarianism and dystopia? By how much? Would those shifts reduce other risks?
    - Would enhanced or centralised state power increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
  - Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those technological changes have on the long-term future?
    - Would further development or deployment of surveillance technology increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
    - Would further development or deployment of AI for police or military purposes increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
    - Would further development or deployment of genetic engineering increase risks from totalitarianism and dystopia? By how much? Would it reduce other risks?
    - Would further development or deployment of other technologies for influencing/controlling people's values increase risks from totalitarianism and dystopia? By how much?
    - Would further development or deployment of life extension technologies increase risks from totalitarianism and dystopia? By how much?
Questions about non-specific risks, existential risk factors, or existential security factors
See here for notes and links related to these topics.
- Value of, and best approaches to, work related to global catastrophes and/or civilizational collapse
  - How much should we be concerned by possible concurrence, combinations, or cascades of catastrophes?
  - How much worse in expectation would a global catastrophe make our long-term trajectory?
    - How effectively, if at all, would a global catastrophe serve as a warning shot?
    - What can we (currently) learn from previous global catastrophes (or things that came close to being global catastrophes)?
  - How likely is collapse, given various intensities of catastrophe?
    - How resilient is society?
  - How likely would a collapse make each of the following outcomes: extinction; permanent stagnation; recurrent collapse; "scarred" recovery; full recovery?
    - What's the minimum viable human population (from the perspective of genetic diversity)?
    - How likely is economic and technological recovery from collapse?
      - What population size is required for economic specialisation, technological development, etc.?
    - Might we have a "scarred" recovery, in which our long-term trajectory remains worse in expectation despite economic and technological recovery? How important is this possibility?
    - What can we (currently) learn from previous collapses of specific societies, or near-collapses?
  - What are the best approaches for improving mitigation of, resilience to, and recovery from global catastrophes and/or collapse (rather than preventing them)? How valuable are these approaches?
    - (How much) Should we worry about "moral hazard"?
    - (How much) Should we worry about "which world gets saved"?
- Value of, and best approaches to, work related to war
  - By how much does the possibility of various levels/types of wars raise total existential risk?
    - How likely are wars of various levels/types?
      - How likely are great power wars?
    - By how much do wars of various levels/types increase existential risk?
      - By how much do great power wars increase existential risk?
- Value of, and best approaches to, work related to improving institutions and/or decision-making
- Value of, and best approaches to, work related to existential security and the Long Reflection
  - Can we achieve existential security? How?
  - Are there downsides to pursuing existential security? If so, how large are they?
  - How important is it that we have a Long Reflection process? What should such a process involve? How can we best prepare for and set up such a process?
We have also collected here some questions that seem less important, or where it's not clear that there's really disagreement on them that fuels differences in strategic views and choices among longtermists. These include questions about "natural" risks (other than "natural" pandemics, which some of the above questions already addressed).
Directions for future work
We'll soon publish a post discussing in more depth the topic of optimal timing for work and donations (update: posted). We'd also be excited to see future work which:
- Provides that sort of more detailed discussion for other topics raised in this post
- Attempts to actually answer some of these questions, or to at least provide relevant arguments, evidence, etc.
- Identifies additional crucial questions
- Highlights additional relevant references
- Further discusses how beliefs about these questions empirically do and/or logically should relate to each other and to strategic views and choices
  - This could potentially be visually "mapped", perhaps with a similar style to that used in this post.
  - This could also include expert elicitation or other systematic collection of data on actual beliefs and decisions. That would also have the separate benefit of providing one "outside view", which could be used as input into what one "should" believe about these questions.
- Attempts to build formal models of what one should believe or do, or how the future is likely to go, based on various beliefs about these questions
  - Ideally, it would be possible for readers to provide their own inputs and see what the results "should" be (a minimal sketch of what such a model could look like is given after this list)
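To illustrate the kind of formal model we have in mind, here is a minimal sketch in Python. All parameter names and numbers are illustrative assumptions rather than estimates we endorse; the idea is just that readers could plug in their own beliefs about a few crucial questions and see what follows.

```python
# Purely illustrative sketch of a reader-adjustable model. None of these
# parameter names or default numbers come from the post; they are assumptions
# chosen only to show the shape such a model could take.

def expected_value_gain(
    p_catastrophe: float,            # reader's credence in an existential catastrophe
    value_if_no_catastrophe: float,  # value of the long-term future without a catastrophe
    value_if_catastrophe: float,     # value of the future conditional on a catastrophe
    relative_risk_reduction: float,  # fraction of the total risk the intervention removes
) -> float:
    """Expected value added by an intervention, in a simple two-outcome model."""
    p_after = p_catastrophe * (1 - relative_risk_reduction)
    baseline = (1 - p_catastrophe) * value_if_no_catastrophe + p_catastrophe * value_if_catastrophe
    improved = (1 - p_after) * value_if_no_catastrophe + p_after * value_if_catastrophe
    return improved - baseline


# Example inputs a reader might plug in (again, purely illustrative):
print(expected_value_gain(
    p_catastrophe=0.1,
    value_if_no_catastrophe=1.0,    # normalise the good future to 1
    value_if_catastrophe=0.0,
    relative_risk_reduction=0.001,  # the intervention removes 0.1% of the risk
))
```

A real model would need far more structure (e.g., how risk and leverage change over time), but even toy versions like this can make disagreements between longtermists more precise.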
Such work could be done as standalone outputs, or simply by commenting on this post or the linked Google docs. Please also feel free to get in touch with us if you are looking to do any of the types of work listed above.
Acknowledgements
This post and the associated documents were based in part on ideas and earlier writings by Justin Shovelain and David Kristoffersson, and benefitted from input from them. We received useful comments on a draft of this post from Arden Koehler, Denis Drescher, and Gavin Taylor, and useful comments on the section on optimal timing from Michael Dickens, Phil Trammell, and Alex Holness-Tofts. We're also grateful to Jesse Liptrap for work on an earlier draft, and to Siebe Rozendal for comments on another earlier draft. This does not imply these people's endorsement of all aspects of this post.
Footnotes
1. Most of the questions we cover are actually also relevant to people who are focused on existential risk reduction for reasons unrelated to longtermism (e.g., due to person-affecting arguments, and/or due to assigning sufficiently high credence to near-term technological transformation scenarios). However, for brevity, we will often just refer to "longtermists" or "longtermism".
2. Of course, some questions about morality are relevant even if longtermism is taken as a starting assumption. This includes questions about how important reducing suffering is relative to increasing happiness, and how much moral status various beings should get. Thus, we will touch on such questions, and link to some relevant sources. But we've decided to not include such questions as part of the core focus of this post.
3. For example, we get as fine-grained as "How likely is counterforce vs. countervalue targeting [in a nuclear war]?", but not as fine-grained as "Which precise cities will be targeted in a nuclear war?" We acknowledge that there'll be some arbitrariness in our decisions about how fine-grained to be.
4. Some of these questions are more relevant to people who haven't (yet) accepted longtermism, rather than to longtermists. But all of these questions can be relevant to certain strategic decisions by longtermists. See the linked Google doc for further discussion.
5. See also our Database of existential risk estimates.
6. This category of strategies for influencing the future could include work aimed towards shifting some probability mass from "ok" futures (which don't involve existential catastrophes) to especially excellent futures, or shifting some probability mass from especially awful existential catastrophes to somewhat "less awful" existential catastrophes. We plan to discuss this category of strategies more in an upcoming post. We intend this category to contrast with strategies aimed towards shifting probability mass from "some existential catastrophe occurs" to "no existential catastrophe occurs" (i.e., most existential risk reduction work).
7. This includes things like how likely "ok" futures are relative to especially excellent futures, and how likely especially awful existential catastrophes are relative to somewhat "less awful" ones.
8. This is about altruism in a general sense (i.e., concern for the wellbeing of others), not just EA specifically.
9. This refers to actions that speed development up in a general sense, or that "merely" change when things happen. This should be distinguished from changing which developments occur, or differentially advancing some developments relative to others.
10. Biorisk includes both natural pandemics and pandemics involving synthetic biology. Thus, this risk does not completely belong in the section on "emerging technologies". We include it here anyway because anthropogenic biorisk will be our main focus in this section, given that it's the main focus of the longtermist community and that there are strong arguments that it poses far greater existential risk than natural pandemics do (see e.g. The Precipice).
Thanks for writing this post! I enjoyed looking over these, many of which I have also been puzzling about.
After seeing this question picked up here I thought I would share some quick thoughts from the perspective of a person with a population biology/evolution background. I think this is a reasonable question to ask, but I suspect it is not as important as the other factors that go into the broader question of what is the minimum population size from which humanity is likely to recover, period. Genetics are just one factor, and probably not the most important one when we consider the probability of recovery after a severe drop in global population.
Suppose that after some catastrophic event the population of humanity has suddenly dropped to a much smaller and more fragmented global population, e.g. 10000 individuals scattered in ~100 groups of 100 each across the globe. While the population size is small, it will be particularly susceptible to going extinct due to random fluctuations in population size. The population size could remain stationary or gradually decline, until eventually a random event causes extinction. Or it could start increasing, until eventually it is large enough to be robust to extinction from a random event.
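To make those random fluctuations concrete, here is a minimal simulation sketch of pure demographic stochasticity; all the numbers are illustrative assumptions rather than estimates of real parameters. Each individual leaves a Poisson-distributed number of offspring per generation, and we track whether a group either dies out or grows large enough to be effectively safe from such fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

def extinction_probability(start_size=100, safe_size=10_000,
                           mean_offspring=1.0, trials=2_000):
    """Estimate P(extinction) under pure demographic stochasticity.

    Each individual leaves a Poisson(mean_offspring) number of offspring, so the
    next generation's size is one Poisson draw with mean n * mean_offspring.
    The population is followed until it dies out or exceeds safe_size.
    """
    extinctions = 0
    for _ in range(trials):
        n = start_size
        while 0 < n < safe_size:
            n = rng.poisson(n * mean_offspring)
        if n == 0:
            extinctions += 1
    return extinctions / trials

print(extinction_probability())                     # ~0.99: a merely stationary group usually dies out
print(extinction_probability(mean_offspring=1.05))  # far lower once the group reliably grows
```

The point is that survival in this regime turns on whether the population can grow at all, not on how much genetic variation it happens to carry.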
The idea of a minimum viable population size (MVP) from a purely genetic perspective is that, since small populations are predicted to have lower average genetic fitness due to an increase in the expression of recessive deleterious mutations ("inbreeding depression"), an increased fixation of deleterious mutations in the population, or a lack of genetic variation that would allow adaptation to environment, there is in theory a population size small enough where a population would decline and go extinct due to low genetic fitness.
But in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.
Another way in which the concept of a MVP is too simplistic is that it is defined with respect to a genetic "equilibrium": it assumes that conditions have been stable enough that there is a constant level of genetic variation in the population. However, after a sudden population decline, we would be far from equilibrium; we would still have lots of genetic variation from the time the population was large. This variation would start to decay, but as different local populations become fixed for different variants, much of this variation would be maintained at the global level and could be converted back into local variation by small amounts of migration. Such considerations are not usually included in MVP considerations. (Some collaborators and I have written about this last point as it relates to conserving endangered species here.)
Perhaps we should keep the term "minimum viable population size" but use a broader definition based on likelihood to survive, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but often MVP is used as in the OP to refer just to extinction due to genetic reasons, so it is important to clarify terms.
Very interesting, thanks! Strong upvoted.
This matches what I had tentatively believed before seeing your comment; i.e., I had suspected that genetic diversity wasn't among the very most important considerations when modelling odds of recovery from collapse. So I've now updated to more confidence in that view.
I raised MVP (from a genetic perspective) just as one of many considerations, and primarily because I'd seen it mentioned in The Precipice. (Well, Ord doesn't make it 100% clear that he's just talking about MVP from a genetic perspective, but the surrounding text suggests he is. Hanson also devotes two paragraphs to the topic, again alongside other considerations.)
I'd agree that clarifying what one means is important. This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other "aspects" of MVP, I also have "What population size is required for economic specialisation, technological development, etc.?"
It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation, as long as they make it clear that that's what they're doing. Indeed, I do the latter myself here: I write there that a seemingly important parameter for modelling odds of recovery is "Minimum viable population for sufficient specialisation to maintain industrialised societies, scientific progress, etc."
I wasn't aware of these points; thanks for sharing them :)
Thanks for your response and the link to your newer post and the Ord and Hanson refs. I'll just add a thought I had while reading:
This all makes sense, but sounds to me like it is at risk of leaving out the population/conservation biology perspective (beyond genetic considerations). A large part of what motivated me to write my original post is that I do think it is indeed valuable to use frameworks from population and conservation biology to study human extinction risk, but it is important to include all factors identified in those fields as being important; namely, environmental and demographic stochasticity, as well as habitat fragmentation and degradation, which could pose much greater risks than inbreeding and genetic drift.
Yeah, that sounds right. Those factors were left out just because I didn't think of including them (because I don't know very much about these frameworks from population and conservation biology), rather than because I explicitly decided to exclude them, and I'd guess you're right that attending to those factors and using those frameworks would be useful. So thanks for highlighting this :)
There are probably also various other "crucial questions" people could highlight, as well as questions that would fit under these questions and get more into the fine-grained details, and I'd encourage people to comment here, comment in the google doc, or create their own documents to highlight those things. (I say this partly because this post has a very broad scope, so a vast array of fields will have relevant knowledge, and I of course have very limited knowledge of most of those fields.)
This is really fantastic, and seems like there is a project that could be done as a larger collaboration, building off of this post.
It would be a significant amount of additional work, but it seems very valuable to list resources relevant to each question, especially as some seem important but have been partly addressed. (For example, re: estimates of natural pandemic risks, see my paper, and then Andrew Snyder-Beattie's paper.)
Given that, would you be interested in having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document?
Thanks for the comment! I definitely agree that listing relevant resources would be useful, as would allowing people to collaborate on that, and in fact we've already done so! The links to relevant resources can be found in the Google docs linked to in each place where it says "See here for notes and links related to these topics."
I actually already had a link to your paper in the Google doc section on naturally arising pandemics. Though I didn't have the Snyder-Beattie paper there, so thanks for mentioning that; I've now added it.
I'd definitely encourage people to comment on those Google docs to suggest additional resources, questions, points about implications, etc.
I hadn't really thought of making this overview article itself an editable Google doc, but it seems possible that'd be useful, so here's the link to what was the draft of this post. People can feel free to continue to make comments there (or here), and I may make some changes to this post in response.
Did you have something different/more than that in mind when you said "having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document"?
Also, as more general points:
- I definitely imagine there could be useful further collaborations building off this project (beyond just suggesting more resources and questions). And I'd guess that I and/or Convergence would be happy to work/talk with people on that (though I'm not speaking for Convergence when I say that).
- I think making collaboratively editable Google docs of things is often a great move (this was part of the motivation for my central directory of open research questions and my database of existential risk estimates).
This is a really useful overview of crucial questions that have a ton of applications for conscientious longtermists!
The plan for future work seems even more interesting though. Some measures have beneficial effects for a broad range of cause-areas, and others less so. It would be very interesting to see how a set of interventions do in a cost-benefit analysis where interconnections are taken into account.
It would also be super-interesting to see the combined quantitative assessments of a thoughtful group of longtermists' answers to some of these questions. A series of surveys and some work in sheets could go a long way towards giving us a better picture of where our aims should be.
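As a purely illustrative sketch of what that "work in sheets" could amount to (the numbers below are invented, not anyone's actual credences), answers to a single question could be pooled like this:

```python
# Hypothetical survey answers to one question, e.g. "How likely is an existential
# catastrophe from AI this century?" -- invented numbers for illustration only.
from statistics import median

def geometric_mean_of_odds(probs):
    """Pool probabilities by averaging them in odds space, one common aggregation choice."""
    pooled_odds = 1.0
    for p in probs:
        pooled_odds *= p / (1 - p)
    pooled_odds **= 1 / len(probs)
    return pooled_odds / (1 + pooled_odds)

answers = [0.01, 0.02, 0.05, 0.10, 0.30]
print(median(answers))                  # 0.05
print(geometric_mean_of_odds(answers))  # ~0.05
print(max(answers) / min(answers))      # 30x spread: the disagreement itself is informative
```

Even something this simple would surface both a central estimate and how widely thoughtful people disagree.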
Looking forward to seeing more work on this area!
I used to think similarly, but now am more skeptical about quantitative information on longtermists' beliefs.
[ETA: On a second reading, maybe the tone of this comment is too negative. I still think there is value in some surveys, specifically if they focus on a small number of carefully selected questions for a carefully selected audience. Whereas before my view had been closer to "there are many low-hanging fruits in the space of possible surveys, and doing even quickly executed versions of most surveys will have a lot of value."]
I've run internal surveys on similar questions at both FRI (now Center on Longterm Risk) and the Future of Humanity Institute. I've found it very hard to draw any object-level conclusions from the results, and certainly wouldn't feel comfortable for the results to directly influence personal or organizational goals. I feel like my main takeaways were:
- It's very hard to figure out what exactly to ask about. E.g. how to operationalize different types of AI risk?
- Even once you've settled on some operationalization, people will interpret it differently. It's very hard to avoid this.
- There usually is a very large amount of disagreement between people.
- Based on my own experience of filling in such surveys and anecdotal feedback, I'm not sure how much to trust the answers, if at all. I think many people simply don't have stable views on the quantitative values one wants to ask about, and essentially "make up" an answer that may be mostly determined by psychological substitution.
(These are also sufficient reasons for why I've never published the results of such surveys, though sometimes there were also other reasons.)
On reflection, maybe this isn't that surprising: e.g. how to delineate different types of AI risk is an active topic of research, and people write long texts about it; some people have disagreed for years, and don't fully understand each other's views even though they've tried for dozens of hours. It would be fairly surprising if the ask to fill in a survey would make the fundamental uncertainty and confusion suggested by this background go away.
Thanks for sharing your thoughts. I feel uncertain about how valuable it'd be to collect quantitative info about people's beliefs on questions like these, and your comment has provided a useful input/perspective on that matter.
Some thoughts/questions in response:
Do you think it's not even net positive to collect such info (e.g., because people end up anchoring on the results or perceiving the respondents as simplistic thinkers)? Or do you just think it's unclear that it's net positive enough to justify the time required (from the survey organiser and from the respondents)?
Do you think such info doesn't even reduce our uncertainty and confusion at all? Or just that it only reduces it by a small amount?
Relatedly, I have an impression that people sometimes deny the value of quantitative estimates/forecasts in general based on seeming to view us as simply either "uncertain" or "certain" on a given matter (e.g., "we'll still have no idea at all"). In contrast, I think we always have some but not complete uncertainty, and that we can often/always move closer to certainty by small increments.
That said, one can share that view of mine and yet think these estimates/forecasts (or any other particular thing) don't help us move closer to certainty at all.
It seems to me that those takeaways are not things everyone is (viscerally) aware of, and that they're things it's valuable for people to be (viscerally) aware of. So it seems to me plausible that these seemingly disappointing takeaways actually indicate some value to these efforts. Does that sound right to you?
E.g., I wouldn't be surprised if a large portion of people who don't work at places like FHI wouldn't realise that it's hard to know how to even operationalise different types of AI risk, and would expect that people at FHI all agree pretty closely on some of these questions.
And I wouldn't be super surprised if even some people who do work at places like FHI thought operationalisations would be relatively easy, agreement would be pretty high, etc. Though I don't really know.
That said, there may be other, cheaper ways to spread those takeaways. E.g., perhaps, simply having a meeting where those points are discussed explicitly but qualitatively, and then releasing a statement on the matter.
Would you apply similar thinking to the question of how valuable existential risk estimates in particular are? I'd imagine so? Does this mean you see the database of existential risk estimates as of low or negative value?
I ask this question genuinely rather than defensively. I'm decently confident the database is net positive, but very uncertain about how positive, and open to the idea that it's net negative.
Personally, I think it's net positive but not worth the time investment in most cases. But based on feedback some other people think it's net negative, at least when not executed exceptionally well, mostly due to anchoring, projecting a sense of false confidence, risk of numbers being quoted out of context, etc.
I think an idealized survey would reduce uncertainty a bit. But in practice I think it's too hard to tell the signal apart from the noise, and so that it basically doesn't reduce object-level uncertainty at all. I'm more positive about the results providing some high-level takeaways (e.g. "people disagree a lot") or identifying specific disagreements (e.g. "these two people disagree a lot on that specific question").
Yes, that sounds right to me. I think it's a bit tricky to get the message right though. I think I'd want to roughly convey a (more nuanced version of) "we still need people who can think through questions themselves and form their own views, not just people who seek guidance from some consensus which on many questions may not exist". (Buck's post on deference and inside-view models is somewhat related.) But it's tricky to avoid pessimistic/non-constructive impressions like "people have no idea what they're talking about, so we should stop giving any weight to them" or "we don't know anything and so can't do anything about improving the longterm future".
I also do feel a bit torn about the implications myself. After all, the survey issues mostly indicate a failure of a specific way of making beliefs explicit, not necessarily a practical defect in those beliefs themselves. (Weird analogy: if you survey carpenters on weird questions about tables, maybe they also won't give very useful replies, but they might still be great at building tables.) And especially if we're pessimistic about the tractability of reducing confusion, then maybe advice along the lines of (e.g.) "try to do useful AI safety work even if you can't give super clear justifications for what you're doing and don't fully understand the views of many of your peers" is among the best generic advice we can give, despite some remaining unease from people who are temperamentally maths/analytic philosopher types such as myself.
I think a database is valuable precisely because it shows a range of estimates, including the fact that different estimates sometimes diverge a lot.
Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly. But this is probably not among the top criteria I'd use to pick research questions, and usually I'd expect most of the value to come from other sources (e.g. identifying potential interventions/solutions, field building, or other indirect effects). The reason mostly is that I'm skeptical marginal research will change "consensus estimates" by enough that the change in the quantitative probability by itself will have practical consequences. E.g. I think it mostly doesn't matter for practical purposes if you think the risk of extinction from AI this century is, say, 8% or 10% (making up numbers, not my beliefs). If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1%, I do think this would be super valuable. But I don't think there is such a research project. (There are already both people whose credences are 0.1% and 10%, respectively, but the issue is they don't fully understand each other, disagree about how to interpret the evidence, etc., and additional research wouldn't significantly change this.)
Again, I do think there are various valuable research projects that would inform our views on how likely extinction from AI is, among other things. But I'd expect most of the value to come from things other than moving that specific credence.
In any case, all of these things are very different from asking someone who hasn't done such research to fill in a survey. I think surveying more people on what their x-risk credences are will have ~zero or even negative epistemic value for the purpose of improving our x-risk estimates. Instead, we'd need to identify specific research questions, have people spend a long time doing the required research, and then ask those specific people. (So e.g. I think Ord's estimates have positive epistemic value, and they also would if he stated them in a survey; the point is that this is because he has spent a lot of time deriving these specific estimates. But if you survey people, even longtermist researchers, most of them won't have done such research, and even if they have lots of thoughts on relevant questions, if you ask them to give a number they haven't previously derived with great care they'll essentially "make it up".)
Thanks, that's all really interesting.
I think I largely agree, except that I think I'm on the fence about the last paragraph.
I agree with what you say in this paragraph. But it seems somewhat separate to the question of how valuable it is to elicit and collate current views?
I think my views are roughly as follows:
"Most relevant experts are fairly confident that certain existential risks (e.g., from AI) are substantially more likely than others (e.g., from asteroids or gamma ray bursts). The vast majority of people, and a substantial portion of EAs, longtermists, policymakers, etc., probably aren't aware experts think that, and might guess that the difference in risk levels is less substantial, or be unable to guess which risks are most likely. (This seems analogous to the situation with large differences in charity cost-effectiveness.) Therefore, eliciting and collecting experts' views can provide a useful input into other people's prioritisation decisions.
That said, on the margin, it'll be very hard to shift the relevant experts' credences on x-risk levels by more than, for example, a factor of two. And there are often already larger differences in other factors in our decisions, e.g., tractability of or personal fit for interventions. In addition, we don't know how much weight to put on experts' specific credences anyway. So there's not that much value in trying to further inform the relevant experts' credences on x-risk levels. (Though the same work that would do that might be very valuable for other reasons, like helping those experts build more detailed models of how risks would occur and what the levers for intervention are.)"
Does that roughly match your views?
Just to check, I assume you mean that there'd be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn't cause such a revision otherwise?
One alternative thing you might mean: "I think the best estimate is 0.1%, and I think a research project that would cause most people to realise that would be super valuable." But I'm guessing that's not what you mean?
Yes, that sounds roughly right. I hadn't thought about the value for communicating with broader audiences.
Yes, that's what I meant.
(I think my own estimate is somewhere between 0.1% and 10% FWIW, but also feels quite unstable and like I don't trust that number much.)
I propose two additions to this list:
- Improving our understanding of consciousness / conscious experience
- Improving our neuroimaging capabilities
Without a solid theory of consciousness, our views about what matters will keep being based on moral intuitions and it will be hard to make progress on disputes.
[Unstructured, quickly written collection of reactions]
I agree that those two things would be valuable, largely for the reason you mention. Improving our neuroimaging capabilities could also be useful for some interventions to reduce long-term risks from malevolence.
Though there could also be some downsides to each of those things; e.g., better neuroimaging could perhaps be used for purposes that make totalitarianism or dystopias more likely/worse in expectation. (See "Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those technological changes have on the long-term future?")
---
I think the main reason I didn't already include a question directly about consciousness is what's captured here:
Though I acknowledge that this division is somewhat arbitrary, and also that consciousness is at least arguably/largely/somewhat an empirical rather than "values"/"moral" matter. (One reason I'm implicitly putting it partly in the "moral" bucket is that we might be most interested in something like "consciousness of a morally relevant sort", such that our moral views influence which features we're interested in investigating.)
---
After reading your comment, I skimmed again through the list of questions to see what of the things I already had were closest, and where those points might "fit". Here are the questions I saw that seemed related (though they don't directly address our understanding of consciousness):
Some related material in this blog post: How understanding valence could help make future AIs safer
Thanks for this!
fwiw I would definitely bucket consciousness research and neuroimaging under "strategy", though agree that the bucketing is somewhat arbitrary.
I might add to this "if we could create sentient (conscious) digital beings":
- Will they have valenced (positive/negative) experiences?
- If so, (how) could we ever know which of their experiences were positive or negative?
I think this relates to the comment from @MichaelA above.
My shortform thoughts on this are HERE