Criticism of EA and longtermism
Summary
The Effective Altruism (EA) community’s skepticism of institutions and implied belief in solving problems from the outside blinds it to the most important vector for solving or mitigating humanity’s existential risks. Most, if not every, long-term risk that humanity faces in the next 100 years can only be solved if we have better governments and institutions. Longtermism as practiced by the EA community misses the forest for the trees.
EA views the next century as possibly the most important in our history. Our technological achievements have come breathtakingly fast, while our ability to handle them now and in the near future has not kept pace. Humanity is at risk of destroying or permanently impairing its ability to flourish via nuclear war, pandemics and bioweapons, AGI risk, environmental catastrophe, and worldwide autocracy. (There is also asteroid risk, which is independent of human development but can be mitigated by technological progress.) Without litigating the relative weighting of these risks, I believe that the largest source of long-term risk is inept institutions and autocratic governments: the only way to mitigate these risks is robust and capable democracies and institutions.
To an individual learning about EA, the community’s focus on AGI risk weakens the movement’s message and ability to evangelize. EA’s focus on AGI risk looks like institutional capture rather than an adherence to the principle of doing the most good through rigorous analysis and measurement. If EA wants to continue to be a community that improves the wellbeing of people today and in the near future through charity and research, it should abandon AGI safety research as a cause.
Many of our current institutions have decayed to the point where they don’t listen to outside voices. Furthermore, most systems don’t change from the outside except through failure and collapse. If EA wants to become a movement that changes the present and the future for the better, it cannot sit outside of institutions issuing directives and policy advice.
Any movement that changes the world has to participate in and improve its systems. Tomorrow’s problems cannot be solved without better institutions. Any country with capable institutions and a government that prioritizes the wellbeing of all of its citizens would do far more to solve today’s problems than any amount of outside charity, no matter how carefully targeted.
As a side note, I live in the US, so my citations and examples will have a markedly American worldview due to the news and history I have consumed. As a large democracy with the highest GDP in the world, the US can have an outsized impact on many of these risks. That said, I think these arguments apply to every government and all of our public institutions.
Existential Risks: Why Governments and Institutions Are the Key to Solving Each of Them
I’ll walk through each risk to show that it:
can be mitigated by capable governments and institutions that prioritize the wellbeing of all citizens
in some cases stems from, or is exacerbated by, inept governments and institutions
in some cases is caused by autocratic governments.
Nota bene, I treat each of these risks as existential without commenting on their relative likelihood.
AGI Risk
The long-term risks break down into two categories:
Misuse: AGI is used by an autocracy to maintain and enforce an institution that stymies human flourishing and growth for eons.
Accidents: AGI systems incidentally destroy humans while performing a task
AGI Misuse:
In the case of AGI misuse, we’re already seeing China using technology and machine learning to stifle dissent and limit freedoms. At the same time, we’re seeing populism and nationalism growing in democracies around the world1. Historically, the rise of autocracies has often been preceded by the rise of populism. Proving this relationship is causal rather than merely correlative is outside the scope of this criticism, but I would ask readers who doubt it to do some research of their own. This leads to the conclusion that AGI risk from autocracies is present and growing. The solution to this problem is democratic states with robust institutions that foster technological development that outpaces adversarial autocracies.
AGI Accidents:
I’d like to start by distinguishing between general AI safety and Catastrophically Capable AGI (CCAGI) safety. General AI safety as a field already exists without EA because there are significant financial incentives to prevent AI accidents in physically capable systems. Liability risks and insurance costs strongly incentivize AI safety research for self-driving cars and any other physically capable AI system. This is important work that is currently being funded in ways that I assume are outcome-oriented and measurable. I argue that most, if not all, progress in the field of AI safety will occur here due to the alignment of incentives (financial and liability risk) and the measurability of results.
As for CCAGI, I argue that most, if not all, of the research being supported by EA is focused on in situ solutions. EA envisages a CCAGI as a system demonstrating general intelligence that far exceeds that of humans, lacking values (but probably having specific safety parameters), and capable of rapidly solving problems in a variety of domains. The issues with in situ solutions are:
EA imagines that we understand what CCAGI systems will look like well enough to build programmatic guardrails for these systems.
CCAGI could look so different from today’s technology that working on it now would be the equivalent of asking a group of physicists and engineers to work on nuclear power safety ten years before the Manhattan Project.
The assumption that we are smart enough to programmatically implement AI safety in CCAGIs that we expect to be smarter than us before we even know what those systems look like is almost comically illogical.
Taken together, the only reliable way to implement CCAGI safety would be a (mostly) physical safety framework:
CCAGI systems would need to be built with an airgap
these systems would need to have limited to no physical capabilities (including communicating over airwaves)
perhaps these systems would have access to the outside world, but only through a carefully prescribed, tightly limited query language (a toy sketch of this idea follows this list)
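To make the last point concrete, here is a minimal sketch of what query mediation might look like, assuming a default-deny allowlist of human-audited query templates; the template names and patterns are hypothetical illustrations, not a real proposal:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryTemplate:
    name: str
    pattern: re.Pattern  # strict regex the argument must fully match

# Hypothetical allowlist: only narrowly scoped, human-audited queries.
ALLOWED = [
    QueryTemplate("protein_lookup", re.compile(r"[A-Z0-9]{4,10}")),
    QueryTemplate("weather_station", re.compile(r"\d{5}")),
]

def mediate(query_name: str, argument: str) -> bool:
    """Allow a query out of the airgap only if it exactly matches a template."""
    for t in ALLOWED:
        if t.name == query_name and t.pattern.fullmatch(argument):
            return True
    return False  # default deny: unknown or malformed queries never leave

assert mediate("protein_lookup", "P53X")
assert not mediate("shell_exec", "rm -rf /")  # arbitrary commands rejected
```

The point of the sketch is that the safety property lives in the mediator and the physical airgap, not in the intelligence of the system behind it.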
Based on my assumptions, EA’s present AGI Safety strategy is counterproductive because it leads to a false sense of security by endorsing the idea that we can solve the problem by outsmarting the systems with code.
If AGI is the existential risk that EA claims, the only way to ensure that a super-intelligent and amoral system doesn’t incidentally or intentionally destroy human civilization depends on the physical limitations placed on CCAGIs. That depends on institutions and laws that develop and enforce physical safety requirements.
Asteroid Risk
Imagine an asteroid large enough to end civilization is identified as being on a collision course with Earth. The sooner the object’s trajectory is altered, the higher our chances of survival. Thus, every second matters for assembling multiple payloads (redundancy) of nuclear weapons and loading them onto every capable spacecraft. A lot of this depends on the speed at which nuclear-enabled governments:
agree on the direction of trajectory alteration—we don’t want Russia and the US picking opposing trajectory perturbations
come to agreements with public and private space agencies (NASA, SpaceX, Roscosmos, ESA, etc.) to schedule launches
transfer nuclear weapons to spacecraft and implement workable detonation capabilities
clear flight paths
A government with institutions controlled by egocentric individuals who are more concerned with personal status, institutional status, and following protocol would significantly slow response times when every second matters. Competent institutions increase the chances that civilization avoids a devastating asteroid impact.
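To quantify “every second matters”, here is a heavily simplified, first-order model of deflection (my own illustration; real orbital mechanics is more favorable, since a small along-track velocity change compounds over successive orbits): far from impact, a velocity change applied t seconds early shifts the arrival point by roughly Δv·t.

```python
# Heavily simplified model: miss distance ~= delta_v * lead_time.
# Every day of institutional delay raises the delta_v (and thus the
# nuclear payload energy) needed to achieve a safe miss distance.
EARTH_RADIUS_M = 6.371e6
SAFE_MISS_M = 3 * EARTH_RADIUS_M   # arbitrary illustrative safety margin
SECONDS_PER_DAY = 86_400

def required_delta_v(days_before_impact: float) -> float:
    """Minimum velocity change (m/s) under the linear approximation."""
    return SAFE_MISS_M / (days_before_impact * SECONDS_PER_DAY)

for days in (365, 180, 30):
    print(f"{days:>3} days of lead time -> {required_delta_v(days):.2f} m/s")
# 365 days -> ~0.61 m/s; 180 days -> ~1.23 m/s; 30 days -> ~7.37 m/s.
```

Halving the lead time roughly doubles the required deflection, which is why bureaucratic delay translates directly into mission difficulty.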
Environmental Catastrophe
If you believe in the risk of near-term environmental catastrophe, then you probably believe that our current patterns of consumption could lead to irreparable environmental damage. This view holds that the activities and products that individuals and corporations choose to consume do not carry costs even remotely commensurate with their environmental externalities2. This leads to behaviors such as long single-occupancy commutes in large vehicles; disposable products; cheaply priced but environmentally costly goods; and far more private jets and megayachts than the world can sustain.
It is not the world we live in today, but imagine if the United States3 were a place with two parties composed of competent individuals who recognized the risks of global warming and pollution and were more concerned with the wellbeing of citizens today and in the near future. In that US we would see (a back-of-the-envelope sketch of the tax schedule and rebate follows this list):
A simple tax on carbon fuels4 that rises over time in a slow and predictable way to allow individuals and corporations to plan.
Institutions capable of measuring the costs of the tax and empowered to grant temporary tax concessions to crucial industries that would suffer from high fuel costs.
A cash transfer to the poorest individuals that more than subsidizes the burden so that the poorest are better off while also encouraging more economical means of transportation.
A tax on imported goods from countries where carbon emissions were not taxed to commensurately reflect the externalities of carbon emissions.
A drop in other types of taxes to offset the tax gains from fuel taxes
A huge investment in mass transit that would further reduce the cost of transportation for most Americans while reducing America’s carbon footprint.
The cost to use private jets and yachts5 would be far higher, reflecting the environmental cost, converting at least some of that negative externality into a consumer cost, and reducing their overall use.
The carbon tax would encourage industries to become more efficient and encourage more innovation in renewables without trying to prematurely choose the most effective technologies.
These technologies would flow to poorer countries that could not afford to tax gas at the same rate as the US
The US would make credible emissions commitments to other countries
Perhaps the US would even subsidize the costs of technological innovations that would reduce emissions in developing countries.
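As a rough illustration of how the predictable tax schedule and progressive rebate could fit together, here is a back-of-the-envelope sketch; the rates, rebate share, and emissions figures are invented for illustration, not a policy proposal:

```python
def tax_rate(year: int, start_year: int = 2025,
             initial_rate: float = 40.0, annual_increase: float = 5.0) -> float:
    """USD per tonne of CO2; rises linearly so households and firms can plan."""
    return initial_rate + annual_increase * max(0, year - start_year)

def flat_dividend(taxed_emissions_tonnes: float, year: int,
                  population: int, rebate_share: float = 0.5) -> float:
    """Per-person cash transfer funded by a share of carbon-tax revenue.

    A flat dividend is progressive: the poorest consume the least carbon,
    so the transfer more than offsets what the tax costs them.
    """
    revenue = tax_rate(year) * taxed_emissions_tonnes
    return rebate_share * revenue / population

# Invented numbers: 5 Gt of taxed emissions, 330M people, ten years in.
print(tax_rate(2035))                         # 90.0 USD/tonne
print(flat_dividend(5e9, 2035, 330_000_000))  # ~681.82 USD per person
```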
To avoid environmental catastrophe, we need to change the way we consume resources. By necessity, improved policies require governments to enact legislation and enforce rules that shift the environmental externalities of consumption onto the consumers responsible for them.
Pandemic Risk
The last few decades have seen huge strides in science that allow humanity to confront pandemics with far better tools than most could have imagined even a few decades ago. At the same time, our governments and institutions appear less capable than they once were.
Arguably, they are less capable of addressing a pandemic than they were 65 years ago. “In April 1957, a new strain of a lethal respiratory virus emerged in East Asia. . . The pandemic of 1957-58 ultimately caused 1.1 million deaths worldwide, and it follows the 1918 crisis as the second-most severe influenza outbreak in US history. Some 20 million Americans were infected, and 116,000 died. Yet researchers estimate that a million more Americans would have died if not for the pharmaceutical companies that distributed 40 million doses of Hilleman’s vaccine that fall, inoculating about 30 million people.”6
Moderna took 43 days to deliver the first box of vials to the NIH for testing, without ever interacting with the virus itself7. The vaccine was delivered to the NIH on February 24, 2020, but wasn’t authorized until December 18th8. It took more than twice as long for the US to begin delivering the COVID-19 vaccine as it did the 1957 H2N2 vaccine.
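As a quick sanity check on that comparison, using the dates cited in the text and footnotes (and assuming September 1, 1957 for the start of that fall’s distribution, which is my rough guess):

```python
from datetime import date

# 1957 H2N2: new strain identified in April 1957; Hilleman's vaccine was
# being distributed by that fall (September 1 assumed for comparison).
flu_days = (date(1957, 9, 1) - date(1957, 4, 1)).days       # 153 days

# COVID-19: vials shipped to the NIH on Feb 24, 2020; FDA emergency
# authorization came on Dec 18, 2020, with first shots days later.
covid_days = (date(2020, 12, 18) - date(2020, 2, 24)).days  # 298 days

print(flu_days, covid_days, round(covid_days / flu_days, 2))  # 153 298 1.95
# Measured instead from the January 2020 release of the viral genome to
# the first public shots, the COVID timeline comfortably exceeds 2x.
```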
When we look at the US’s response to monkeypox, it is clear that America’s public health systems remain woefully inadequate even two years after the start of a pandemic that cost trillions of dollars and millions of lives. When monkeypox started spreading in the US, we learned that:
20 million doses of a smallpox vaccine (Jynneos) that protects against monkeypox had expired9
there were only 2,400 usable doses left in America’s “Strategic National Stockpile”10
federal officials chose not to replenish the expired vaccine doses and instead invested more money in developing a freeze-dried vaccine with a longer shelf life11
the phase 3 FDA trial of the freeze-dried Jynneos vaccine has dragged on since 201712
realizing that having no vaccine due to the delays was a problem, “the United States purchased vast quantities of raw vaccine product, which has yet to be filled into vials.”13
“The raw, unfinished vaccine remains stored in large plastic bags outside Copenhagen, at the headquarters of the small Danish biotech company Bavarian Nordic, which developed Jynneos and remains its sole producer.”14 This is clearly the behavior of inept institutions (with a quite large budget) rather than a country whose institutions have learned lessons from a recent pandemic. As an aside, in a different world, the 20 million expired smallpox doses would have been used to prevent monkeypox outbreaks in Africa15.
Polio, a disease once eliminated from the shores of North America, is now circulating in NYC. I will leave it as an exercise to the reader to understand what institutional failures led us to this outcome.
The medical and technological tools to fight pandemics have advanced remarkably, but our institutions are less capable than they were 65 years ago. Improving our public health system and government capacity to respond to pandemics is the best way to reduce pandemic risk.
Bioweapons Risk
Right now, bioweapons risks look similar to nuclear risks: it would take a sophisticated state actor to design a disease targeting a particular population. As biological technologies improve over time, it will become easier to build virulent and targeted bioweapons that could wipe out a large swath of humanity, or a subset based on certain biomarkers. This means that over time adversarial actors will need less funding and fewer capabilities to create and release bioweapons.
The ways to reduce and prevent these risks are similar to those for nuclear:
Strong states run by state actors with coherent and consistent policies on the use of bioweapons, backed by a credibly deterring threat of response
International institutions that can monitor and control access to technologies capable of building bioweapons
Said technology would also be capable of doing fantastic genetic work of huge value, so competent institutions that can protect against bioweapons while allowing technological progress are important.
Worldwide Autocracy
This could occur when:
all states fail or devolve into autocracy
one autocratic state gains a technological advantage that allows it to take over the world
The only way to prevent autocracies from taking over the world is strong democratic states with capable institutions that are better at fostering technological innovation than adversarial autocracies.
Nuclear War
An extremely simplified framework:
Nuclear-armed powers are more likely to wage (proxy) wars against each other when one or more states behave inconsistently.
The smaller the number of people determining whether to use nuclear weapons, the greater the risk that nuclear weapons will be used.
Autocracies are more likely to wage wars of expansion because the interests of those in power are very narrow. A few individuals making the decisions for an entire country can value territorial expansion as more important than the risk of nuclear war.
A small group of individuals in a failed state could decide to use nuclear weapons.
The risks of nuclear war (and existential risk) increase when:
State institutions responsible for charting international policy are weak or inept
Political actors behave inconsistently relative to their states’ charted international policies towards nuclear powers, or those policies are themselves incoherent. This could be due to any combination of narrow self-interest, an incomplete view of risks, or incompetence.
Democratic states are replaced by autocratic governments with narrow self interests that view territorial expansion as more important than the risks of nuclear war.
These points argue for consistent behavior by democratically elected states. This requires strong and capable state departments that can understand the complexities of their relationships and can convey strategy to state actors. By extension, this relies on state actors that place the interests of the state above their own political interests when it comes to international diplomacy.
Conclusion
We have created governments and institutions that have built world-spanning infrastructure, ensured stability and the safety of their citizens, and enshrined and protected (with mixed results) human rights that enable human flourishing. Yet today, our governments and public institutions have become incapable of thinking systemically about their purported areas of expertise, disinclined to listen to outside experts, and unable to accomplish their intended missions. In short, inept. This is not how our institutions always behaved. Recall the CDC of 65 years ago versus today. Or look at the US military’s “development of the Polaris nuclear-missile system in the late 1950s. The whole package—a nuclear submarine, a solid-fuel missile, an underwater launch system, a nuclear warhead and a guidance system—went from the drawing board to deployment in four years (and using slide rules). Today, according to the Defense Business Board, the average development timeline for much less complex weapons is 22.5 years.”16 This decay of institutions isn’t unique to the US. The EU approved its first vaccine after the US did. The EU allowed Russia to invade Ukraine, an ally and neighbor, even after having twice witnessed the annexation of territories in Ukraine and Georgia over the last 15 years. Quite simply, our democratic governments and their institutions look incapable of tackling the challenges of existential risk today.
EA’s strategy of identifying future risks and funding think-tanks conveys an image of progress that can be achieved through groups of individuals removed from the issues at hand and without systemic power to implement solutions. This strategy implies that EA thinks it can reduce existential risk by funding initiatives and institutions that help to solve long term risks and that these institutions can then benignly direct governments and institutions. But if our governments and institutions are inept, it’s likely they won’t listen to experts, outside institutions, or prediction markets until their protocols and brittle heuristics fail spectacularly and perhaps with existential consequences.
EA’s approach to longtermism separates the wellbeing of people today from that of future humans. However, there are ways to improve our collective future while focusing on the wellbeing of those who are living today. Instead of looking like a community that effectively spends its resources, EA’s focus on AGI safety looks like an institution captured by a well-respected subset of the group’s members (ML & AI researchers) that pushed a thesis that gives them more status within the EA community and high-status jobs without measurable deliverables. Furthermore, if there’s no way to measure whether AI safety research is making progress on protecting us from Catastrophically Capable AGI, then there’s no way to know if EA is making progress in risk reduction. I fail to see how this is effective.
EA’s current long term focus is antithetical to its goal of broad evangelism. Two hypothetical poll questions:
Would you like to belong to a movement that advocates for and devotes significant resources to reducing AI risk?
Would you like to work in the highest paying job you can get and then donate most of your income to an organization that espouses AI risk as the most tractable long term threat?
How do you think non-EA members would respond? Or to cater to EA biases, how do you think STEM graduates would respond? I suspect it would poll very badly. I suspect that a movement that tries to solve tomorrow’s problems in a way that doesn’t contribute to humanity’s wellbeing today will fail.
Systemic change looks intractable, but I believe it’s the most impactful way of changing the trajectory of civilization. I assume EA has avoided focusing on systemic change because the problem looks intractable, but institutional and governmental reform is in fact the most pressing problem for the future and today. A movement that espouses improving the wellbeing of humans today and in the future via systemic change has much broader appeal than “work hard in finance, then donate your money to the cause of saving tomorrow’s people.” In such a movement, each individual could choose the institution and work that most closely corresponded with their skills and interests.
I don’t know how to reform our institutions or improve our governance. That’s what I’m asking EA to consider doing. A first approximation of such a movement would look like:
develop principles for reforming institutions and political parties
develop principles for identifying and supporting politicians regardless of party who demonstrate:
the capacity to think systemically in their campaigns and policy choices
eschewal of maximalist thinking
a support for the wellbeing of all citizens
support individuals within institutions and parties that are working towards these goals
encourage community members to join the most critical government institutions and to run for office
almost every institution needs reform and improvement.
this encourages each member to choose work that feels meaningful and motivates them
advocate for election and party reforms
If EA considers systemic change too hard, the community should focus on effective charity and problems that more clearly contribute to solving today’s problems. If the EA community wishes to grow, it should avoid solving problems that do not improve the wellbeing of people today. If the EA community wishes to effect change, it should avoid causes where the results of the work can’t be measured.
If the EA community wishes to tackle humanity’s greatest problems, it must architect and execute a plan to reform our institutions and governments from within. This is an aspirational goal that can inspire a movement that changes the world.
Notes & References
https://www.wilsoncenter.org/article/populism-and-democracy
In economics, an externality or external cost is an indirect cost or benefit to an uninvolved third party that arises as an effect of another party’s (or parties’) activity. Externalities can be considered as unpriced goods involved in either consumer or producer market transactions. Air pollution from motor vehicles is one example.
I chose the US because it’s the second largest polluter in the world as well as having a high enough income to be able to afford a tax that transfers wealth from rich carbon emission consumers to the poor.
The optimal way to reduce consumption of a good with negative externalities is to raise its cost to more closely reflect those externalities via taxes.
According to the Yachting Pages, the longest Superyacht in the world, 180m M/Y Azzam, holds 1,000,000 litres of fuel. To put it into perspective, that is the equivalent of filling a regular hatchback car 23,800 times. Or, six Boeing 747 commercial airliners.
How the U.S. Fought the 1957 Flu Pandemic
It took Bancel and his Moderna team only two days to create the RNA sequences that would produce the spike protein, and 41 days later, it shipped the first box of vials to the National Institutes of Health to begin early trials. Afeyan keeps a picture of that box on his cell phone.
Moderna delivered the first doses of its Covid-19 vaccine to the NIH for testing on Feb. 24, 2020, and “the first Moderna shot went into a volunteer’s arm in Seattle on March 16, 2020,” according to Afeyan. After testing the Moderna vaccine on 30,000 volunteers, on Dec. 18, 2020, the FDA authorized it for emergency public use, and three days after that, the first Moderna vaccines were administered to front-line health workers, according to Afeyan.
Less than a decade ago, the United States had some 20 million doses of a new smallpox vaccine — also effective against monkeypox — sitting in freezers in a national stockpile.
Such vast quantities of the vaccine, known today as Jynneos, could have slowed the spread of monkeypox after it first emerged in the United States in mid-May. Instead, the supply, known as the Strategic National Stockpile, had only some 2,400 usable doses left at that point, enough to fully vaccinate just 1,200 people.
At several points federal officials chose not to quickly replenish doses as they expired, instead pouring money into developing a freeze-dried version of the vaccine that would have substantially increased its three-year shelf life.
BARDA has supported the development of a freeze-dried version of the vaccine with longer shelf-life to replace the stockpile and in 2017 awarded the Company a ten-year contract valued at USD 539 million for supply of freeze-dried vaccines to the SNS. Part of this contract (USD 37 million) has funded the Phase 3 study.
https://www.nytimes.com/2022/08/01/nyregion/monkeypox-vaccine-jynneos-us.html
https://www.nytimes.com/2022/08/01/nyregion/monkeypox-vaccine-jynneos-us.html
While Western countries have largely avoided other monkeypox outbreaks, African countries haven’t been so fortunate. Between November 2005 and November 2007, a study found that monkeypox cases in the DRC spiked 20-fold compared with the 1980s. In Nigeria, a severe 2017 outbreak occurred almost 40 years after the country’s last reported case. Again, the response outside Africa was minimal. “Why should the West care?” Tomori asks.
https://marginalrevolution.com/marginalrevolution/2017/04/no-great-submarine-stagnation.html & https://web.archive.org/web/20170501061602/http://www.en.netralnews.com/news/currentnews/read/4900/how.us.navy.must.be.in.indonesia..everywhere.at.once