I’d also love for someone to turn a bunch of questions from my draft Politics, Policy, and Security from a Broad Longtermist Perspective: A Preliminary Research Agenda into forecasting questions, and many would most naturally have horizons of >5 years.
This comment is again asking you to do most of the work, in the form of picking out which questions in that agenda are about the future and then operationalising them into crisp forecasting questions. But I’ll add as replies a sample of some questions from the agenda that I think it’d be cool to operationalise and put on Metaculus.
On armed conflict and military technology
How likely are international tensions, armed conflicts of various levels/types, and great power war specifically at various future times? What are the causes of these things? (See the toy sketch after this list for how much the choice of time horizon matters here.)
How might shifts in technology, climate, power, resource scarcity, migration, and economic growth affect the likelihood of war?
Are Pinker’s claims in The Better Angels of Our Nature essentially correct?
Are the current trends likely to hold in future? What might affect them?
How do international tensions, strategic competition, and risks of armed conflict affect the expected value of the long-term future? By what pathways?
What are the plausible ways a great power war could play out?
E.g., what countries would become involved? How much would it escalate? How long would it last? What types of technologies might be developed and/or used during it?
What are the main pathways by which international tensions, armed conflicts of various levels/types, or great power war specifically could increase (or decrease) existential risks? Possible examples include:
Spurring dangerous development and/or deployment of new technologies
Spurring dangerous deployment of existing technologies
Impeding existential risk reduction efforts (since those often require coordination and are global public goods)
Sweeping aside or ushering in global governance arrangements
Weakening (or strengthening) democracies
Worsening (or improving) the values of various actors (e.g., reducing or increasing impartiality or inclinations towards multilateralism among the public or among political leaders)
Changing the international system’s global governance arrangements and/or polarity (which could then make coordination easier or harder, make stable authoritarianism more or less likely, etc.)
Serving as a “warning shot” that improves values, facilitates coordination, motivates risk reduction efforts, etc.
How might plausible changes in variables such as climate, resource scarcity, migration, urbanisation, population size, and economic growth affect answers to the above questions?
To what extent does this push in favour of or against work to affect those variables (e.g., climate change mitigation, open borders advocacy, improving macroeconomic policy)?
What are the best actions for intervening on international tensions, strategic competition, risks of armed conflict, or specifically the ways that these things might harm the long-term future?
What are the most cost-effective actions for achieving these goals?
In relation to international tensions, strategic competition, and risks of armed conflict in particular, we can also ask the following specific sub-questions:
How useful are things like diplomacy, treaties, arms control agreements, international organisations, and international norms? What actions are best in relation to those things?
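One practical point when operationalising the first question above is that "how likely" depends heavily on the resolution horizon chosen. Here is a minimal sketch, assuming a purely hypothetical constant 2% annual probability of great power war onset and independence across years (neither the figure nor the assumptions come from the agenda), of how cumulative probability scales with horizon:

```python
# Toy illustration only: convert an assumed constant annual probability of
# great power war onset into cumulative probabilities over several horizons.
# The 2% annual figure is a placeholder, not an estimate from the agenda.

annual_prob = 0.02  # hypothetical annual onset probability

for horizon_years in (5, 10, 25, 50, 100):
    # P(at least one onset within the horizon), assuming independence
    # across years and a constant per-year probability.
    cumulative = 1 - (1 - annual_prob) ** horizon_years
    print(f"{horizon_years:>3} years: {cumulative:.1%}")
```

Under these toy assumptions, a question resolving in 5 years looks very different from one resolving in 50, which is one reason the >5-year horizons mentioned above matter for how crisp forecasting questions get framed.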
On authoritarianism and/or dystopias
What are the main pathways by which each type of authoritarian political system could reduce (or increase) the expected value of the long-term future?
E.g., increasing the rate or severity of armed conflict; reducing the chance that humanity has (something approximating) a successful long reflection; increasing the chances of an unrecoverable dystopia.
Risk and security factors for (global, stable) authoritarianism
How much would each of the “risk factors for stable totalitarianism” reviewed by Caplan (2008) increase the risk of (global, stable) authoritarianism (if at all)?
How likely is the occurrence of each factor?
What other risk or security factors should we focus on?
What effects would those factors have on important outcomes other than authoritarianism? All things considered, is each factor good or bad for the long-term future?
E.g., mass surveillance, preventive policing, enhanced global governance, and/or world government might be risk factors from the perspective of authoritarianism but security factors from the perspective of extinction or collapse risks (see also Bostrom, 2019).
What are the best actions for influencing these factors?
How likely is it that relevant kinds of authoritarian regimes will emerge, spread (especially to become global), and/or persist (especially indefinitely)?
How politically and technologically feasible would this be?
Under what conditions would societies trend towards and/or maintain authoritarianism or a lack thereof?
What strategic, military, economic, and political advantages and disadvantages do more authoritarian regimes tend to have? How does this differ based on factors like the nature of the authoritarian regime, the size of the state/polity it governs, and the nature and size of its adversaries?
How likely is it that relevant actors will have the right motivations to bring this about?
How many current political systems seem to be trending towards authoritarianism?
How much (if at all) are existing authoritarian regimes likely to spread? How long are they likely to persist? Why?
How likely is it that any existing authoritarian regimes would spread globally and/or persist indefinitely? Why?
Typology of, likelihoods of, and interventions for dystopias
How likely is each type of dystopia to arise initially and then to persist indefinitely?
How bad would each type of unrecoverable dystopia be, relative to each other, to other existential catastrophes, and to other possible futures?
How much should we worry about recoverable or temporary equivalents of each type of unrecoverable dystopia?
E.g., how much would each increase (or decrease) the risk of later extinction, unrecoverable collapse, or unrecoverable dystopia?
What are the main factors affecting the likelihood, severity, and persistence of each type of dystopia?
What would be the best actions for reducing the likelihood, severity, or persistence of each type of dystopia?