How the Ukraine conflict may influence spending on longtermist projects
Abstract
The ongoing war in Ukraine is a strong sign that we are unlikely to enter an era of increased international cooperation. Spending on longtermism and on the reduction of existential risks should reflect this fact.
Epistemic status: This is my first long post and it is a rather spontaneous thought. At the end I will mention a few reasons why I could be wrong.
Introduction
If you value the long-term survival and flourishing of humanity, the current level of existential risk is much too high. Toby Ord estimates in “The Precipice” that there is a 1-in-6 probability of a catastrophe this century that destroys humankind's long-term potential. If civilization is to have a non-negligible chance of surviving for as long as the Earth remains habitable, this level of risk is too high by several orders of magnitude (see the sketch below). Reducing existential risk to a low level becomes much easier in the presence of strong international organisations that are capable of enforcing evidence-based decisions.
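To make “several orders of magnitude” concrete, here is a back-of-envelope sketch. The numbers are my own illustrative assumptions, not figures from Ord: I assume the Earth stays habitable for roughly another billion years (about 10^7 centuries) and treat a 50% chance of surviving that long as “non-negligible”.

```python
import math

# Back-of-envelope sketch (my own illustrative assumptions, not Ord's figures):
# assume Earth remains habitable for ~1 billion more years, i.e. ~1e7 centuries,
# and a constant per-century probability r of existential catastrophe.
N = 1e7           # remaining habitable centuries (assumption)
p_target = 0.5    # "non-negligible" survival probability over that span (assumption)

# Survival over N centuries: (1 - r)**N = p_target  =>  r = 1 - p_target**(1/N)
r_required = 1 - p_target ** (1 / N)
r_current = 1 / 6  # Ord's per-century estimate from "The Precipice"

print(f"per-century risk needed for 50% survival: {r_required:.1e}")         # ~6.9e-08
print(f"Ord's estimated per-century risk:         {r_current:.1e}")          # ~1.7e-01
print(f"gap: ~{math.log10(r_current / r_required):.0f} orders of magnitude")  # ~6
```

Even under these generous assumptions, the acceptable per-century risk is millions of times lower than the current 1-in-6 estimate.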
Unfortunately, such institutions are not the current state of world politics. Moreover, we are not even moving in the right direction. In my personal opinion, relations between the West and Russia will remain very bad for decades to come, even if the war in Ukraine ends soon. Tensions between the USA and China are rising, too.
While I believe that strong international institutions are necessary to achieve existential security in the long run, you should not base your plans on the assumption that international cooperation will be strong in the coming decades. Therefore, it could be wise to shift some money from interventions that require international cooperation to interventions that can be implemented on a national level, just as the manager of an investment fund changes its portfolio in the face of a global crisis. The criteria for funding new projects could be adapted accordingly; the toy calculation below illustrates the idea.
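A minimal sketch of that reallocation logic, with entirely made-up numbers of my own (the intervention names and payoff values are hypothetical, not claims about actual cost-effectiveness): an intervention that depends on cooperation loses expected value as the probability of strong cooperation falls, while a national-level intervention keeps its value.

```python
# Toy reallocation logic (all numbers are made up for illustration):
# a cooperation-dependent intervention pays off only if international
# cooperation stays strong; a national-level intervention pays off either way.

def expected_values(p_strong_cooperation: float) -> dict[str, float]:
    value_if_strong = {"global monitoring treaty": 10.0, "national early detection": 4.0}
    value_if_weak = {"global monitoring treaty": 0.5, "national early detection": 4.0}
    return {
        name: p_strong_cooperation * value_if_strong[name]
        + (1 - p_strong_cooperation) * value_if_weak[name]
        for name in value_if_strong
    }

print(expected_values(0.6))  # treaty: 6.2 > national: 4.0 -> fund the treaty
print(expected_values(0.2))  # treaty: 2.4 < national: 4.0 -> shift to the national level
```

The point is only that the optimal allocation flips once the probability of strong cooperation drops below some threshold; the actual numbers would have to come from expert judgment.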
Below, I will give a few examples of what this could mean for different types of existential risk. These examples should be taken with a grain of salt; experts probably have better ideas.
Climate change
The Russian economy depends strongly on natural gas and other fossil fuels. The Russian government may try to sabotage the Paris Agreement, since it wants to keep selling its natural gas abroad, and it may join forces with the OPEC countries to delay the transition to renewables. International diplomacy could therefore become less effective, making direct air capture and the development of cheap CO2-neutral industrial processes more important.
Nuclear security
Since nuclear weapons are now Russia's life insurance, global nuclear disarmament is highly unlikely. I think it is fair to say that the value of donations to ALLFED has risen in recent weeks.
Synthetic pandemics
I am sure that no government wants a pandemic that kills all of humankind. However, global monitoring becomes much harder in an atmosphere of mutual distrust. Therefore, early detection at the national level, the development of broad-spectrum antivirals, and the stockpiling of protective equipment could become more effective.
Artificial intelligence
At the moment, it is unlikely that there will be a ban on fully autonomous weapons or a global agreement to implement best practices for AI. Moreover, an AGI could be used in a military conflict before the alignment problem is fully solved. Therefore, it could make more sense than before to speed up the development of a friendly AGI in the West. I am aware that this approach is very risky, but there is also the risk that AGI will be developed by a country that is more interested in military effectiveness than in safety. To be honest, that country could be the USA, too.
Reasons why I could be wrong
My outlook on politics could be too pessimistic.
There are cheap, neglected interventions for increasing international cooperation.
Longtermist organizations already spend little money on interventions that rely on international coordination.
Reducing existential risk without some sort of global agreement is not feasible, even in the short term, and increasing international cooperation is still the best way forward.
Comments
I think it’d be bizarre if the war in Ukraine didn’t shift our funding priorities in some way. WW3 now appears likelier, and likelier to happen sooner. Presumably, this should shift more of the focus towards (a) preventing it and (b) minimizing its harm.
This is a good post. I like your simple thesis with clear actionable takeaways.
Agreed on all of these except climate change. I think the Russia/Ukraine war will probably result in Europe and America investing more in energy technology (mostly green, low-carbon energy), and will probably raise the price of oil (i.e. lowering the total amount of oil that gets burned), such that we might come out ahead on our climate goals. I suppose my disagreement with you is that I see progress on climate/energy as significantly driven by technology and economics, so it is okay if we take a hit on international coordination and unity in exchange for better green-energy tech. But others could certainly disagree here!
The effects on nuclear and biorisk seem pretty direct, as you outlined. The effect on AI seems more indirect, but AI is so important that possibly this is still the biggest effect of the bunch. If the world is just heading towards less trust / less coordination / more militarism / more great-power competition / more friction between the USA and China, that is probably bad for humanity in a lot of ways, including AI x-risk as you described.