Key characteristics for evaluating future global governance institutions

This post outlines a framework of five characteristics that seem most relevant for longtermists to consider when evaluating the impact of a “global agent” (as defined below) like a world government or a strong global governance institution.

This framework began as an attempt to understand how longtermists should evaluate possible world governments. If a world government were to form, it would likely have a large impact on the long-term future.[1] However, there are many forms that a world government could take; consider the differences between a democratic federalist world government and a totalitarian world government. The framework in this post is intended to highlight the most relevant characteristics that distinguish different world governments in terms of impact.

This framework can also be applied more broadly to consider the longtermist impacts of entities besides world governments. For example, a global governance institution that enforces agreements between countries might have a similar longtermist impact to a world government. As a result, we use the term “global agent” in this post to refer to any agent that takes actions in a global context (but with the understanding that the framework is most relevant for global agents that have authority over large proportions of civilization).[2]

Global agents that could be analyzed with this framework include:

  • A world government

  • A global governance institution with limited power (like the UN)

  • A hegemonic state without complete global control

  • A powerful multinational company

Key takeaways

The following characteristics seem most relevant for longtermists to consider when evaluating the impact of a global agent:

  • Benevolence: how aligned the global agent’s values are with increasing the expected value of the long-term future

  • Competence: how effectively the global agent can make plans to achieve its goals

  • Power: the non-intellectual abilities or resources that would help a global agent execute its plans

  • Stability: the extent to which (a) the global agent’s values won’t change and/or (b) the agent’s capabilities (i.e., competence and power) won’t decrease

  • Scope: the proportion of civilization under the authority or control of the global agent

The first three characteristics are essentially the same as the three factors discussed in the BIP framework (Aird & Shovelain 2020): benevolence, intelligence (which we call “competence” here)[3], and power. This post puts these characteristics in the context of global agents like governments or institutions, and also analyzes two additional characteristics—stability and scope—which are useful to consider in the specific context of global agents.

The implications of the BIP framework apply here, and are copied below:

Implications include that it’s likely good to:

  1. Increase actors’ benevolence.

  2. Increase the intelligence of actors who are sufficiently benevolent

  3. Increase the power of actors who are sufficiently benevolent and intelligent

And that it may be bad to:

  1. Increase the intelligence of actors who aren’t sufficiently benevolent

  2. Increase the power of actors who aren’t sufficiently benevolent and intelligent

Taking into account stability and scope, some other implications include:

  • It may be bad to increase the capabilities (i.e., competence or power) of even a benevolent global agent if its values are not stable, in particular if its values have a high enough chance of becoming negative over time.[4]

  • It may be good to decrease the stability of a global agent that is not sufficiently benevolent, i.e., make its values more likely to change or make the agent more likely to collapse.

  • It may be good to reduce the scope of a global agent that is not sufficiently benevolent, e.g., by preventing it from becoming a singleton (i.e., the world’s dominant decision-making authority).

As this framework takes a longtermist perspective, it mainly focuses on how the characteristics of a global agent affect long-term outcomes like existential risk, the possibility of ongoing moral catastrophes, and the long-term potential of civilization.

Some caveats on this framework:

  • Some entities that we wish to analyze, like governments or institutions, might only be modelled abstractly as cohesive agents. In reality, they consist of many distinct components interacting with each other, and those components might have different levels of benevolence, competence, power, etc., than the whole.

  • The five characteristics outlined in this framework are not independent of each other, and perhaps some of them could have been merged. There are also likely other categories of characteristics that are important to consider but which haven’t been included here.

  • “Global agent” could also refer to an agent that acts on an interplanetary scale.

Benevolence

From the BIP framework:

By benevolence, we essentially mean how well an actor’s moral beliefs or values align with the goal of improving the expected value of the long-term future.

While different global agents’ values differ along many different dimensions, we’re defining benevolence to be a simplified characteristic that evaluates those values based on how they affect the long-term future.[5]

A global agent like a government might not have values in the same way an individual would, but its values might be indicated in the following ways:

  • For organizations, values might be indicated by a founding document or mission statement (though in many cases these explicit values may be quite different from the organization’s implicit values).

  • If the global agent is strongly centralized, the values of the key decision makers might be especially indicative of the values of the global agent as a whole.

  • For democratic governments, the values of constituents will likely determine the values of the government at least in part.

  • If AI plays a large role in the global agent’s decision making, the values of the AI (if any) will be relevant.

To determine the benevolence of a global agent, you might ask the following questions:

  • Does the global agent value the welfare of all humans?

  • Does the global agent value the welfare of nonhuman animals, and to what extent?

  • Does the global agent value the welfare of digital sentience, and to what extent?

  • Does the global agent value the future of civilization?

The benevolence of a global agent largely determines whether its actions will be positive or negative for the long-term future. Some concrete examples:

  • A global agent may explicitly not care about the long-term future, prioritizing short-term gains over long-term welfare (by e.g. elevating existential risk if that happens to improve short-term welfare, profit, power, etc.).

  • A global agent may not care about some forms of suffering that may be morally important, leading to an ongoing moral catastrophe (of e.g. animals or digital sentience).

  • A global agent may choose never to expand beyond Earth when it’s otherwise possible, limiting the potential of civilization.

Competence

From the BIP framework:

By intelligence [which this post calls “competence”], we essentially mean any intellectual abilities or empirical beliefs that would help an actor make and execute plans that are aligned with the actor’s moral beliefs or values. Thus, this includes things like knowledge of the world, problem-solving skills, ability to learn and adapt, (epistemic) rationality, foresight or forecasting abilities, ability to coordinate with others, etc.

The competence of a global agent could be determined by analyzing a few key factors:

  • Quality of information. Global agents might have better information through tools like surveillance or by accessing experts who know more about a particular domain.

  • Quality of intelligence. Global agents might have access to high-intelligence individuals or systems to make better decisions, like expert advisers that make recommendations to decision makers. A global agent might also be able to augment human intelligence through e.g. embryo selection or through the use of AI assistants. AI might also be able to make decisions directly.

  • Internal coordination. A global agent’s decision making depends heavily on its internal structures. For example:

    • A structure that allows individual parts of the system to veto decisions could lead to status quo bias or decision paralysis.

    • Checks and balances could prevent some (otherwise misaligned) part of government from perverting its mission.

    • Centralization of decision-making authority to just a small number of individuals makes coordination problems less relevant. This can allow for decisive action, as in the case of the Wuhan lockdown in 2020, but can also cause bad decisions to be unchecked, as in the case of various decisions by 20th century totalitarian regimes (Caplan 2008).[6]

  • Incentives guiding the internal components. Internal components of a global agent, like individuals in a bureaucracy or departments in a company, might be following incentives that lead to reduced competence of the whole. For example:

    • Democratically elected officials might follow policies that increase chances that they are re-elected rather than those that most help their country.

    • Party officials in an authoritarian country might need to signal their loyalty to the party in a way that trades off with good governance.

    • A department of a company might be rewarded according to metrics that are only loose proxies for success of the whole company.

Some illustrative examples of how competence can affect the long-term future:

  • If a world government with low competence mismanages risks, it might increase the chance of civilization collapse due to an AI catastrophe.

  • A democracy characterized by political gridlock might slow down the speed with which valuable technological developments are deployed.

  • Mistaken beliefs about which beings are or would be conscious could lead to ongoing moral catastrophe if e.g. factory farming continues or suffering digital minds are brought into existence.

Power

From the BIP framework:

By power, we essentially mean any non-intellectual abilities or resources that would help an actor execute its plans (e.g., wealth, political power, persuasive abilities, or physical force).

It’s clear that the power of a global agent is relevant to its influence on the long-term future. Some examples to illustrate the importance of power include:

  • A powerful global agent could have the enforcement capability to effectively prevent dangerous use of biotechnology or AI, reducing existential risk (Bostrom 2019).

  • A powerful global agent could ignore or determine the preferences of its constituents, which could allow it to lock in a moral catastrophe.

  • A powerful world government could more quickly or capably expand civilization (e.g. support a larger population).

While I think determining the power of a global agent is typically straightforward, some caveats apply. Some forms of power are useful independently of other agents, like material goods or physical force, but other forms rely on the context of other agents. Some examples of this type of power include:

  • A military alliance with another global agent

  • For a world government, recognition from regional governments

Stability

Stability is the degree to which the above factors (benevolence, competence, power) will persist: the degree to which the global agent’s values won’t change and its capabilities won’t decrease.[7] For example, if a government’s values can easily undergo substantial change (e.g. through succession of leadership), it has low stability. Similarly, if its ability to affect the world can decrease substantially in a short period of time (e.g. the government collapses), it has low stability.

Stability can be broken down into two major factors: value stability and capability stability.

Value stability refers to the degree to which the values (and thus the benevolence) of the global agent will stay the same. In governments, succession of leadership positions can decrease value stability, especially in situations where leaders hold a lot of decision-making authority. For example, some totalitarian regimes of the 20th century collapsed in part because the government’s values shifted after a new leader took over (Caplan 2008).[8]

Here are some ways that value stability of global agents could be changed:

  • The lifespan of leaders could be extended.

  • Highly effective surveillance or mind-reading technologies could root out dissent within government leadership, increasing value stability (Rafferty 2020).

  • Stakeholders, or “who has a say” in a democratic system, might change over time.

  • Genetic engineering could make leaders or other decision-makers more likely to be aligned with certain values.

  • Social engineering (through propaganda, surveillance, etc.) could make the values of society more aligned with the values of the government, increasing stability.

  • The government might explicitly allow for value change through, e.g., constitutional amendments or other mechanisms.

  • AI might substantially increase value stability, especially if it is difficult to modify the values of the AI (due to, e.g., its power).

  • Competition (e.g., competition between parts of government, individuals in government, corporations) might cause value erosion over the long term due to evolutionary dynamics (Bostrom 2004).[9]

Capability stability refers to the extent to which the global agent is likely to keep its capabilities (power + competence) at or above its current levels.

  • A government that increases extinction risk has lower stability, as it is more likely to be destroyed.

  • A government that prevents competing governments, coups, or rebellions from gaining power has higher stability.

  • A federalist world government that allows for regional governments to take back power or exit the federation has lower stability.

In general, I think the types of instability that are most relevant to a global agent’s long-term future impact are changes in values or complete losses of capability (i.e. collapse).

There are many cases that don’t fall neatly into either value stability or capability stability[10], but this breakdown still seems useful.

Scope

Scope refers to both the proportion of civilization under the global agent’s authority or control and the extent of that control. Perhaps a more informal way of understanding scope is “to what extent should we consider the global agent to be a singleton?” or “how close is the global agent to having total control of civilization?”

Scope is an important characteristic to consider due to the unique considerations of singleton scenarios (compared to multipolar scenarios). For example:

  • A singleton might be more stable than a global agent in a multipolar scenario due to having no competition.

  • Multipolar dynamics lead to evolutionary pressure, which can cause value drift.

Additionally, we might have different moral intuitions about the value of situations in which a singleton rules civilization and situations where a global agent controls most, but not all, of civilization. For example, compare the following two scenarios:

  1. All of the value of the future is lost to a singleton with values orthogonal to those of humans[11]

  2. 5% of the value of the future is captured by an aligned global agent (while the other 95% is captured by an agent that produces no moral value)

Intuitively, the second situation is vastly preferable to the first situation. This might be a reason to reduce the scope of agents that are not sufficiently benevolent.[12]

Analysis of characteristics

Beyond the implications explained in the post about the BIP framework, we can draw further conclusions from the other two characteristics in the context of global agents:

  • We should be cautious about increasing capabilities of a benevolent global agent if value stability is low (e.g., if decision-making is centralized and there’s high variance in the values of successors). For example, technologies like surveillance or military equipment might be used responsibly by a benevolent government, but if the values of that government can quickly change, then we need to be concerned about those capabilities in the wrong hands.

  • We may want to decrease the stability of global agents that are not sufficiently benevolent. This might be done by pushing back against stability-increasing tools like persuasion-related technologies, making an effort to decentralize power, or, if it’s truly preferable, even increasing extinction risk.

  • We may want to reduce the scope of non-benevolent global agents, i.e., ensure holdouts of moral value in the face of a non-benevolent global agent. This might encourage longtermists to create strongholds of benevolent values. Analogously, longtermists might seek to prevent strongholds of malevolent values that would cause ongoing moral catastrophes.

Directions for future research

Longtermist research on world governments or other strong global governance institutions seems to be at an early stage, and the framework in this post is only a step in clarifying some key concepts.

Future research topics that seem especially relevant include:

  • What world governments or strong global governance institutions seem most likely to occur in the future?

    • Of the most plausible scenarios, which ones have the greatest impact?

    • How can longtermists affect how these scenarios play out, either by mitigating the greatest risks or improving the likelihood of good outcomes?

  • What types of global governance initiatives would move institutions in the directions we want (e.g. higher benevolence, then higher value stability, etc.)?

It’s unclear to what extent longtermists should be conducting research on this topic now since it doesn’t seem likely that a world government will form anytime soon. That said, there has been little enough research on this topic from a longtermist perspective that the value of information seems high.

Credits

This research is an intern project of Rethink Priorities. It was written by Juan Gil. Thanks to Michael Aird, Tom Barnes, Marie Davidsen-Buhl, Lizka Vaintrob, Peter Wildeford, and Linch Zhang for helpful feedback. If you like our work, please consider subscribing to our newsletter. You can see more of our work here.

Bibliography

Aird, M., & Shovelain, J. (2020, July 20). Improving the future by influencing actors’ benevolence, intelligence, and power. EA Forum. Retrieved September 23, 2021, from https://forum.effectivealtruism.org/posts/4oGYbvcy2SRHTWgWk/improving-the-future-by-influencing-actors-benevolence

Bostrom, N. (2004). The Future of Human Evolution. Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, 339-371.

Bostrom, N. (2006). What is a Singleton? Linguistic and Philosophical Investigations, 5(2), 48-54.

Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455-476. doi:10.1111/1758-5899.12718

Caplan, B. (2008). The totalitarian threat. Global Catastrophic Risks. doi:10.1093/oso/9780198570509.003.0029

Rafferty, J. (2020, August 10). A new X-risk factor: Brain-computer interfaces. EA Forum. Retrieved September 23, 2021, from https://forum.effectivealtruism.org/posts/qfDeCGxBTFhJANAWm/a-new-x-risk-factor-brain-computer-interfaces-1

Notes


  1. A world order with a single dominant decision-making authority, like a world government, is sometimes referred to as a singleton. The emergence of a singleton could mitigate some types of existential risks by solving global coordination problems. However, a singleton would also make it much easier to “lock in” values, including bad values, increasing the risks of some other existential catastrophes (Bostrom 2006).

  2. Countries like the United States or China are “global agents” since they take actions in a global context, but this framework is less useful to evaluate their longtermist impact since they have authority over smaller proportions of civilization.

  3. We use the term “competence” instead of “intelligence” since the former more intuitively covers some aspects of this characteristic in the context of institutions, like ability to coordinate internally.

  4. However, efforts to increase the value stability of a benevolent global agent will probably involve increasing its capabilities to some extent since these characteristics are not independent of each other.

  5. The values don’t need to be explicitly about the long-term future, but we’ll be evaluating how aligned the values are with improving the long-term future. This might lead to some unusual classifications of global agents as “benevolent”. For example, a world government, even one that treats people poorly for some time, might reduce extinction risk by mere virtue of existing and removing the competitive dynamics that would have existed otherwise. This world government might therefore be considered somewhat “benevolent”.

  6. From Caplan’s “The totalitarian threat”: “Another notable problem with totalitarian regimes was their failure to anticipate and counteract events that even their leaders saw as catastrophic. Stalin infamously ignored overwhelming evidence that Hitler was planning to invade the Soviet Union. Hitler ensured his own defeat by declaring war on the United States. Part of the reason for these lapses of judgment was concentration of power, which allowed leaders’ idiosyncrasies to decide the fates of millions. But this was amplified by the fact that people in totalitarian regimes are afraid to share negative information. To call attention to looming disasters verges on dissent, and dissent is dangerously close to disloyalty.”

  7. Stability is an extension to the BIP framework that seems valuable in the context of global agents since we might be analyzing longer time scales, and the agents are often organizations that can have meaningful value change over time. Stability might also be viewed as the “derivative” of the three characteristics in the BIP framework, since we’re interested in how those characteristics might change over time.

  8. From Caplan’s “The totalitarian threat”: “Khrushchev’s apostasy from Stalinism was perhaps unforeseeable, but the collapse of the Soviet Union under Gorbachev could have been avoided if the Politburo considered only hard-line candidates. The Soviet Union collapsed largely because a reformist took the helm, but a reformist was able to take the helm only because his peers failed to make holding power their top priority.”

  9. For more on value erosion due to evolution, see these explanations from FHI.

  10. For example, suppose a military coup succeeds in taking control of the government and establishes new values. This could be viewed either as the government staying roughly the same but changing values or as a collapse of the former government (thus making it lose its capabilities).

  11. For example, a self-replicating machine with no consciousness that spreads across the universe.

  12. In most situations, this might be the same as just reducing the capabilities of the global agent, which is already covered above. However, this might also mean empowering other benevolent global agents to capture a larger proportion of civilization.
