“Holy Shit, X-risk” talk
The following is the transcript of a talk I gave at the recent Longtermist Organizer Summit titled “Holy Shit, X-risk.” It presents an unusually emotional case (by EA standards) for x-risk reduction, following my post on why I think EA could benefit from emphasizing emotional altruism. It got good feedback, so I’m sharing it here with a few caveats:
This was given to a longtermist audience, so I assumed a philosophical buy-in on x-risk reduction that might not be fitting for other audiences.
Attendees’ mood and setting matter a lot for a talk like this.
Based on feedback I received, I would change parts of this talk and some of the overall vibe if I gave it again (e.g., making it more personal, tweaking the reflection prompt). DM me if you’re considering giving this talk or something similar.
I appreciate Emma Abele, Luise Wöhlke, Anjay Freidman, Lara Thurnherr, Luke Moore, Jay Kim, Kris Chari, Lenni Justen, and Miranda Zhang for their helpful comments on my draft and delivery.
Update 10/27/22: Sometimes when I think back to this talk I think, ahh, that may have been too moralizing / too emotionally pushy / too self-aggrandizing / too “look at how much I care” / not enough buy-in from the audience on this emotional framing. I’m glad I experimented with this talk and learned from it, but I might present it differently if I did it again.
Introduction
Today, I want to talk to you about minimizing existential risk – a goal that you and I share.
I want to let the goal of minimizing existential risk sink in a bit; really look it in the face. I also want to talk about some attitudes that I think are helpful for approaching this problem. After the talk, we’ll do a little 10-minute summit intention/goal-setting activity.
If at any point during the talk you need to get some fresh air, use the restroom, or whatever, feel free to leave the room whenever.
X-risk Awfulness
Let’s start with a little reflection: Take a minute to think about things you cherish in the world. Things you love; things that make you feel warm and fuzzy or just fill you with awe. I’ll give you a minute.
Let’s hear a few of them. Can people raise their hands and share some of the things that came to mind?
[...]
Ok, one more reflection: Think about what you want the world to look like by 2100. Think about the problems you want humanity to have overcome by then – areas where we can so clearly do better. What do you hope are some attributes of this world? I’ll give you a minute.
Alright, let’s hear a few again.
[...]
Thanks again for sharing.
Throughout this talk, I encourage you to keep in mind the concrete things you just thought about: the things you cherish about existing, and the problems you hope we will overcome.
I’m going to be talking about ‘existential risk’, and I’m worried that this concept can sometimes remain too abstract.
It can sometimes feel like existential risk is in some sense “out there,” removed from our immediate experience walking through this world. The concept of existential risk can oftentimes be just that… a concept.
I don’t think pondering existential risk in the abstract does justice to how awful an existential catastrophe would be.
When you think about existential risk, I encourage you to ground some part of it in your experience. Attach to it the things that you cherish from earlier – that light up your life. Attach your hopes for the progress that we, humanity, can still make.
Take a moment.
And then recognize that an existential catastrophe could extinguish all of that light.
It could extinguish that light for everyone alive today.
And for everyone else who could possibly be alive someday.
I hate to be so dark here, but that’s kind of it.
An existential catastrophe is the irreversible end of any flourishing today, and the lock-in of a future that can never live up to what humanity could have achieved. It is a future devoid of all the things we collectively cherish about this world. It is a future where the progress we know is possible today is never realized. And if some form of humanity lives on beyond an existential catastrophe, it is a crippled, locked-in humanity that can never fully rise to its feet again.
Most likely, an existential catastrophe also spells death for at least 8 billion humans and countless other sentient beings.
I know how scope insensitive I am – we all are – so whenever I say a number bigger than a million – let alone a billion – I’m painfully aware of our inability to grasp the meaning behind such large numbers.
Our emotions are just not hardwired to accurately represent suffering and loss at the scale of an existential catastrophe.
But, to the extent you feel comfortable and like it could be helpful, I encourage you to sometimes try.
Try to cultivate a visceral attachment to the things you intellectually care about.
There are different strategies for this I want to share. I’m not going to nudge you to try them now – I think this is a pretty vibes-dependent thing and something you should only do if and when it feels appropriate.
Nevertheless, I’ll share two strategies for emotionally orienting towards x-risk. First, you might bring to mind how moved you can be by one person’s quest for a better life and how much you would do to alleviate their suffering. Then confront what this suggests you might do for the whole world – if only you could accurately represent each individual among the masses.
Alternatively, you might think again about the things you cherish and your hopes for a better future and recognize how an existential catastrophe extinguishes all that’s good and still possible.
I’ll share something that motivates me: thinking about humanity’s self-awareness and progress over time. We’re evolved apes. That’s always a nice reminder: we’re evolved apes, guys. Over the past centuries, we’ve undergone remarkable shifts in self-awareness and really just gotten quite meta about our existence. We’ve understood that we’re not the center of the universe and that the sun doesn’t revolve around the earth; that the universe is far bigger than we could have imagined; we’ve recognized our common ancestry with all other beings and fellow humans, and we’ve increasingly come to see our fellow humans as fellow humans – not as part of some other tribe. Throughout all of this, we’ve also created beautiful artwork and writing that grapples with this peculiar human condition we find ourselves in.
An existential catastrophe would just erase all of that. It ends the story; it drops the torch. It just leaves this emptiness that feels so wrong, relative to how far we’ve come and how far we could go.
There’s some time after this talk to reflect on why you care about reducing existential risk. I don’t want to pressure you to feel some certain way that doesn’t feel genuine, and I want to acknowledge that sometimes these feelings can be overwhelming or unwelcome – so engage with these exercises only as much as feels appropriate.
I’ll be around at the summit for the rest of today and am happy to talk about how people are connecting to X-risk. For now I’ll press on.
X-risk likelihood
I want to invite you to make one more mental move in your thinking about existential catastrophes.
Realize that existential catastrophes are actually possible.
I think most of you, upon reflection, agree with what I just said about how bad an existential catastrophe would be. Of course, you might say – of course I want humanity to flourish. Of course the destruction of everything that is and could be would suck.
But still, even if you agree that an existential catastrophe would be awful, it’s possible to not really feel any urgency around that – any sense that something is wrong and needs fixing.
I encourage you to check whether you’re subconsciously subscribing to the expectation that things will be fine – that the problem will be solved by some other people.
What if they don’t?
There’s no rule that says we make it. There’s no rule that says we get through this century.
You probably agree with me here again. You’re probably like, of course there’s no rule.
But I think even if you agree with me here, it is still really easy to not actually entertain the idea of failure. We’ve made it this far, you might think. There’s a path where humanity perseveres against all the challenges it faces. There is a happy ending, like in all those stories and movies where we beat the bad guys, aliens, or AIs.
There is a happy ending, that’s why I’m here. But there’s also just an ending.
There’s also a very real possibility that we don’t make it. Toby Ord, in The Precipice, puts the chance of an existential catastrophe this century at one in six, and some people think the situation has gotten even more dire since he made that estimate.
1-in-6?
I wouldn’t want to play Russian roulette for anything I care about, and certainly not for everything I care about.
The importance of figuring it out yourself
This has really been a cheerful talk, right?
I make you contemplate how bad an existential catastrophe would be, and then I remind you just how very possible it is.
I don’t do this because I want to stress you out.
(Well, at least that’s not like the main reason I’m doing this.)
I am giving this talk because I want you to hone your focus on the goal that I think all of you endorse: minimize existential risk. To the extent that it feels true to your philosophical beliefs, I want you to clarify that goal to yourself, and feel the corresponding drive.
And I want you to own this goal.
At some level, I want you to feel like reducing existential risk is on you.
It took me far too long to have any sense of this ‘it’s on me’ feeling, but I’ve come to see it as really transformative. Especially as community builders, it can be easy to feel like you’re just following the script. Like you’re just executing on the things other people tell you are important. Ooh, look! FTX megaprojects, the opinions of high-status EAs!
I started my journey in EA thinking everyone else had the answers.
Ahh, what’s the best way to reduce existential risk? Follow what 80,000 Hours says.
Ahh, what should I do as a university student? Build an EA student group and run the intro fellowship, duh.
As I got closer to the people who I thought had all the answers, I realized no one had all the answers.
A lot of people have well-backed views that you can borrow from, but reducing existential risk is just an incredibly complex problem. Solving it can’t be boiled down to step-by-step instructions. No one can give you a playbook, and your starting point in terms of experience and aptitudes will always be different from everyone else’s.
I claim that the people who are going to do the most to safeguard humanity – which could well be all of you – are the people who feel that reducing existential risk is on them; the people who realize when other people don’t have all the answers and have the audacity to seek them themselves.
I think there’s another important, standout quality among people who want to reduce x-risk, and it’s specific to us community builders: remembering what we’re community building for. We’re building the effective altruism community to solve problems in the real world. We care about those problems – about actually influencing reality – not about effective altruism in and of itself. (Although I do care about you all in and of yourselves.)
The first time I heard advice in this vein of ‘figure it out yourself, build your own understanding of real-world problems’, it stressed me out a bit. I think that’s a very natural reaction – and if you’re feeling a bit stressed, know that you’re not alone. It’s really, really understandable to be stressed and to sometimes wish that things were being taken care of and that you didn’t have to deal with it.
Building your own understanding is hard and intimidating.
But ask yourself this: What’s stopping you? What’s holding you back from just actually looking into the problems and the cruxes yourself – and then doing the thing?
If it’s “not enough time”, I think you’re underestimating the benefits that come from actually having a sound reason for why what you’re doing is the best thing you can do to reduce existential risk. It’s really easy to slip into grooves of motivated reasoning when trying to answer this question for yourself.
I notice that I, and too many of my peers, still operate with the implicit assumption that other people have this existential risk thing figured out, and that we for some reason couldn’t become experts in a problem area.
I think that’s bullshit. There’s no law written down somewhere that says you can’t become an expert in some aspect of existential risk or start a new ambitious project. There’s no divine law that says you’re only qualified to follow the perceived playbook of EA community building. There’s no wall between you and doing the thing.
There’s a quote I love that I think encapsulates this daring energy of going out there, figuring out the problem, and doing something about it: “The only rules are the rules of physics.”
Give yourself permission to reach your own conclusions, and to act on the conclusions you arrive at, unconstrained by the things you think you can’t do.
Empower
Each of you is capable of this. Each of you has a track record of doing cool things that landed you here in ~mystical~ Berkeley, California.
You deserve to be here, and you can go on to do incredible things for this world if you commit yourself to it. There’s no one type of person who can solve x-risk – no prototype to measure yourself against. Each of you has a set of unique talents and aptitudes that you can leverage to improve the world.
The main thing is, in my eyes, that you give yourself permission to try.
And what a wonderful place the EA and longtermist community is to try. What a wonderful network of people, norms, and resources we have at our side. There are loads of people, including people at this summit, who would gladly talk with you if you have ideas that can change the world; it’s OK if you try something ambitious and come up short; and if your idea is compelling enough, money need not be the reason why it can’t happen.
Conclusion
I like considering the longtermist community in the grand scheme of humanity’s timespan. There are 100 billion people behind us, 8 billion alongside us, and countless ahead of us – if we can safely navigate this Precipice.
So much could depend on this century and what we do over the course of our lives. The task at hand is bigger than any of us could handle alone. But together, we stand a chance. And we’ve got to step up to the challenge. If not us, who else? If not you, who else?
As you go through this weekend and beyond, connect with the things you cherish and your hopes for humanity. Connect with why it is that you care about reducing existential risk and building a better future. Feel it, and feel some sense that it is on you – that no one else can give you all the answers. Cultivate that audacity to understand reality and what you can do to shape it.
I’ll close with a quote from Joseph Carlsmith, a senior researcher at Open Philanthropy and a wonderful writer:
You can’t keep any of it; there’s nothing to hold back for; your life is always flowing outwards, through you and away from you, into the world; the only thing to do is to give it away on purpose, and the question is where and to what.
Thank you.
This may be an idiosyncratic reaction, but to me these appeals where you ask the audience to imagine things they cherish and care about don’t work so well. Maybe there’s too much “tell” and not enough “show.” (Or maybe I’m just a bit averse at having to think up examples on my own during a presentation.)
I would prefer a list of examples where people can fill in personal details, for instance, something like this:
“Imagine the wedding of a friend, the newborn child of a colleague at work, the next accomplishment in a meaningful hobby of yours, such as climbing difficulty level 5 at the local climbing gym...” Etc. Maybe also a bit of humor and then turn to dead serious again, for instance:
“… the next movie or TV series that captures audiences of all ages. And yeah, you’re probably thinking, <<the next installment of putting attractive people on an island to make reality TV>>, and maybe you joke that the shallowness on display means we deserve to go extinct. But let’s keep this serious for a bit longer – people sometimes use humor as a deflection strategy to avoid facing uncomfortable thoughts. If an existential risk hit us, we could never make fun of trashy TV ever again. Even the people in that reality TV show probably have moments of depth and vulnerability – we don’t actually think it would be a good thing if they had to go through the horrors of a civilizational collapse!”
I think a lot of people – at least on an intuitive level – don’t feel like a really good future is realistic, so they may not find this framing compelling. Perhaps the progress narrative (as argued for somewhat convincingly in “The Better Angels of Our Nature”) was intuitively convincing in 2015, but with Trump, the outgrowths of social justice ideology, the world’s Covid response, and the war in Ukraine, it no longer seems intuitively believable that we’re trending towards moral progress or increased societal wisdom. Accordingly, “the progress we know today is possible” is likely to ring somewhat hollow to many people.
If you want the talk to be more convincing, I recommend spending a bit of time arguing why all hope isn’t lost. For instance, I would say something like the following:
“The upcoming transition to an AI-run civilization not only presents us with great risks, but also opportunities. It’s easier to design a new system from scratch than to fix a broken system – and there’s no better timing for designing a new system than having superintelligent AI advisers to help us with it. It’s a daunting task, but if we somehow manage the improbable feat of designing AI systems that care about us and the things we care about, we could enter, for the first time ever, a trajectory where sane and compassionate forces are in control over the future. It’s hard to contemplate how good that could be. [Insert descriptions of what a sane world would be like where resources are plenty.]”
Edit: I guess my pitch will still leave people with skepticism because the way I put it, it relies strongly on outlandish-seeming AI breakthroughs. But what’s the alternative? It just doesn’t seem true that the world is on a great trajectory currently and we only have to keep freak accidents from destroying all the future’s potential value. I think the existential risk framing, the way it’s been common in Oxford-originating EA culture (but not LW/Yudkowsky), implicitly selects for “optimism about civilizational adequacy.”
Hm, one alternative could be “we have to improve civilizational adequacy” – if timelines are long enough for interventions in that area to pan out, this could be an important priority and part of a convincing EA pitch.
Thanks for the thoughtful comment! I like the list where people can fill in personal details, and agree that humor can be a welcome (and useful) addition here.
I also appreciate the point that imagining a good future might be hard, given the current state of the world. The appeal to an AI-enabled better future could land with some EA audiences, but I think that would feel like an outlandish claim to many. I guess I (and maybe others?) have a bit more faith that we could build a better future even without AGI. Appealing to the trajectory of human progress would be a supporting argument here that some might be sympathetic to.