I want to let the goal of minimizing existential risk sink in a bit; really look it in the face.
This may be an idiosyncratic reaction, but to me these appeals where you ask the audience to imagine things they cherish and care about don’t work so well. Maybe there’s too much “tell” and not enough “show.” (Or maybe I’m just a bit averse to having to think up examples on my own during a presentation.)
I would prefer a list of examples where people can fill in personal details, for instance, something like this:
“Imagine the wedding of a friend, the newborn child of a colleague at work, the next accomplishment in a meaningful hobby of yours, such as climbing difficulty level 5 at the local climbing gym...” Etc. Maybe also add a bit of humor and then turn dead serious again, for instance:
“… the next movie or TV series that captures audiences of all ages. And yeah, you’re probably thinking, ‘the next installment of putting attractive people on an island to make reality TV,’ and maybe you joke that the shallowness on display means we deserve to go extinct. But let’s keep this serious for a bit longer – people sometimes use humor as a deflection strategy to avoid facing uncomfortable thoughts. If an existential risk hit us, we could never make fun of trashy TV ever again. Even the people in that reality TV show probably have moments of depth and vulnerability – we don’t actually think it would be a good thing if they had to go through the horrors of a civilizational collapse!”
It is a future where the progress we know today is possible, is never realized.
I think a lot of people – at least on an intuitive level – don’t feel like a really good future is realistic, so they may not find this framing compelling. Perhaps the progress narrative (as argued for somewhat convincingly in “The Better Angels of Our Nature”) was intuitively convincing in 2015, but with Trump, the outgrowths of social justice ideology, the world’s Covid response, and the war in Ukraine, it no longer seems intuitively believable that we’re trending towards moral progress or increased societal wisdom. Accordingly, “the progress we know today is possible” is likely to ring somewhat hollow to many people.
If you want the talk to be more convincing, I recommend spending a bit of time arguing why all hope isn’t lost. For instance, I would say something like the following:
“The upcoming transition to an AI-run civilization presents us not only with great risks, but also with opportunities. It’s easier to design a new system from scratch than to fix a broken system – and there’s no better time to design a new system than when we have superintelligent AI advisers to help us with it. It’s a daunting task, but if we somehow manage the improbable feat of designing AI systems that care about us and the things we care about, we could enter, for the first time ever, a trajectory where sane and compassionate forces are in control of the future. It’s hard to contemplate how good that could be. [Insert descriptions of what a sane world would be like where resources are plenty.]”
Edit: I guess my pitch will still leave people skeptical because, the way I put it, it relies strongly on outlandish-seeming AI breakthroughs. But what’s the alternative? It just doesn’t seem true that the world is currently on a great trajectory and that we only have to keep freak accidents from destroying all of the future’s potential value. I think the existential risk framing, as it’s been common in Oxford-originating EA culture (but not LW/Yudkowsky), implicitly selects for “optimism about civilizational adequacy.”
Hm, one alternative could be “we have to improve civilizational adequacy” – if timelines are long enough for interventions in that area to pan out, this could be an important priority and part of a convincing EA pitch.
Thanks for the thoughtful comment! I like the list where people can fill in personal details, and agree that humor can be a welcome (and useful) addition here.
I also appreciate the point that imagining a good future might be hard, given the current state of the world. The appeal to an AI-enabled better future could land with some EA audiences, but I think that would feel like an outlandish claim to many. I guess I (and maybe others?) have a bit more faith that we could build a better future even without AGI. Appealing to the trajectory of human progress would be a supporting argument here that some might be sympathetic to.