I’m a recent graduate of a Yale M.A. program in global affairs and public policy. Before coming to Yale I served four years as a US Army officer. Before that I studied political science and economics at Johns Hopkins. I love travel, sports, and writing, especially about the moral implications of policy issues.
I was first drawn to EA as a way to maximize the impact of my charitable giving, but I now use it to help plan my career as well. My current plan is to focus on U.S. foreign policy, working to mitigate great power competition, a cross-cutting risk factor that exacerbates several types of existential threats. I also love GiveDirectly, and I value altruism that respects the preferences of its intended beneficiaries.
Artificial intelligence is very difficult to control. Even in relatively simple applications, top AI experts struggle to make systems behave as intended, and the danger grows as AI becomes more powerful. Indeed, many experts fear that if a sufficiently advanced AI escaped our control, it could extinguish all life on Earth. Because an AI pursues whatever goal we give it with no regard for other consequences, it would stop at nothing, not even at human extinction, to maximize its reward.
We can’t know exactly how this would happen, but to make it less abstract, let’s imagine some possibilities. An AI with internet access might save millions of copies of itself on unsecured computers around the world, each ready to activate if another were destroyed. That alone would make it virtually indestructible unless humans destroyed the internet and every computer on Earth. Doing so would be politically difficult in the best case, and all the more so if the AI were also deploying millions of convincing disinformation bots to distract people, conceal the truth, or persuade humans not to act. The AI might also mount sophisticated cyberattacks to seize control of critical infrastructure such as power stations, hospitals, or water treatment facilities. It could hack into weapons of mass destruction, or invent its own. And whatever it couldn’t do itself, it could bribe or blackmail humans to do for it, using cash seized from online bank accounts.
For these reasons, many AI experts consider advanced AI likelier than climate change to wipe out human life. Even if you think this scenario is unlikely, the stakes are high enough to warrant caution.