How to get climate activists to care about other existential risks
Hello, I am unpleasantly surprised by how few activists are concerned with existential risks to humanity in general, compared to how many are focused on climate, the environment, etc. I think technology is a far bigger risk and could strike sooner than, for example, global warming. Additionally, AI will likely be able to solve nature’s problems easily. Therefore, I think activists should focus primarily on the research and development of safe technologies.
I am a beginning existential-risk activist, and I would like to get some climate activists to think about their mission more broadly and pursue it more effectively. But I certainly don’t want to make the situation worse, which is why I’m asking for your opinion here.
Convincing people what to be activists about has huge potential, but it can also make things worse. Persuading people of anything carries the risk of various types of failure; for example, Toby Ord says in his book The Precipice that trying to convince others to fight for a cause other than the one they hold dear is counterproductive, or worse.
Do you think Ord is right? Do you think there is a way of communicating with activists that could get them to see their mission a little differently?
I think a big problem is, for example, their sense of belonging, which for some can be the most important part of their activism. I would be taking that away from them, because existential-risk activism is still weak and thus does not have a large community they could belong to.
Here is my plan:
contact several activists and talk to them, to test how willing they are to discuss the effectiveness of their activism
draft an open letter to activists accordingly
ask individual activists how it affects them
correct it according to feedback
publish it
I think it would be important to emphasize that they do not have to abandon their cause, that it is enough to see it from a broader perspective. That we want the same thing (a nice planet and nature, for people and animals), we just want to achieve it differently. That they don’t have to leave their climate community (in fact, it may be better if they stay in it to spread new ideas there). And that I appreciate their work and have no doubts about the importance of the environment. It is also not necessary for them to organize demonstrations etc.; I think at this stage it is enough for people to educate themselves and learn to perceive the risks. So I am not asking anything difficult of them.
I would appreciate your opinions on this topic. Above all, I am concerned with not damaging our cause; inefficient effort is not so bad, since it can be made more efficient later. I would also welcome more advice on how to be a good existential-risk activist. Thanks!
ExponentialDragon—this is such a timely, interesting, & important question. Thanks for raising it.
Tens of millions of young people are already concerned about climate change, and often view it as an existential risk (although it is, IMHO, a global catastrophic risk rather than an existential risk). Many of them are already working hard to fight climate change (albeit sometimes with strategies & policies that might be counter-productive or over-general, such as ‘smash capitalism’).
This is a good foundation for building concern about other X risks -- a young generation full of people concerned about humanity’s future, with a global mind-set, some respect for the relevant science, and a frustration with the vested political & corporate interests that tend to downplay major global problems.
How can we nudge or lure them into caring about other X risks that might actually be more dangerous?
I also agree that asking them to abandon their climate change friends, their political tribes, and their moral in-groups is usually asking too much of them.
So how do we turn a smarter-than-average 22-year-old who thinks ‘climate change will end the world within 20 years; we must recycle more!’ into someone who thinks ‘climate change is really bad, and we should fight it, but here’s cause area X that is also a big deal and worth some effort’?
I’m not sure. But my hunch is that we need to piggy-back on their existing concerns, and work with the grain of their political & ideological beliefs & values. They might not care about AGI X-risk per se, but they might care about AI increasing the rate of ‘economic growth’ so quickly that carbon emissions ramp up very fast, or AI amplifying economic inequalities, or AI propaganda by Big Oil being used to convince citizens to ignore climate change, or whatever. Some of these might seem like silly concerns to those deeply involved in AI research… but we’re talking here about recruiting people from where they are now, not recruiting idealized hyper-rational decouplers who already understand machine learning.
Likewise with nuclear war as an X risk. Global thermonuclear war seems likely to cause massive climate change (eg through nuclear winter), and that’s one of its most lethal, large-scale effects, so there’s potentially strong overlap between fighting climate change due to carbon emissions, and fighting climate change due to nuclear bombs.
I think EA already pays considerable lip service to climate change as a global catastrophic risk (even though most of us know it’s not a true X risk), and we do that partly so we don’t alienate young climate change activists. But I think it’s worth diving deeper into how to recruit some of the smarter, more open-minded climate change activists into EA X risk research and activism.
Thanks! You seem to think about this topic like me, which gives me hope I am not alone. I am glad the world is full of people who want a better future for us and I think directing them to the right causes may be easier than to make new activists. I believe just as charities compete with each other, activism causes do as well right? Because there is only a limited number of activists.
Here’s how I imagine you might communicate with climate activists (at least based on how this post is written):
“Hey climate activists, I think you’re wrong to focus on climate, and I think you should focus on the risk from technology instead. I reckon you just need to think harder, and because you haven’t thought hard enough, you’re coming to the wrong conclusions. But if you just listen to me and think a bit better than you have done, you’ll realise that I’m right.”
If the pitch has this tone, even if it’s much less blatant than this, I fear that your targets might pick up on it and find it off-putting.
I appreciate that you might communicate differently with climate activists than how you communicate on this forum, but I thought it worth flagging.
I love it!
Speaking as someone who started my country’s local FridaysForFuture, this is basically the same plan I had. If you go to my profile or have seen me at EAG, you’ll know this is an idea I can’t shut up about, because I think it’s super tractable!
Some comments:
I would prioritise finding people to collaborate with. You are currently one person, and finding even one or two others to work with greatly increases the odds of success. I’d be glad to chat more. You are not the first person I’ve heard seriously propose this idea :)
I was also worried about reputational risk, so I went around EAG asking AI governance people whether the idea made sense. I expected a mixed reception, but all the governance people I spoke to were very open to the idea (excited, even). Of course, they said they’d be careful about outright supporting it due to reputational risks. However, x-risk activism generally suffers from pluralistic ignorance, so cautious openness is the best response I could’ve hoped for anyway.
Re: Fighting for a different cause, I think that’s less concerning than you make it out to be. In climate advocacy, intersectionality is fairly common. Concepts like decolonisation, climate justice, animal rights and degrowth are all only tangentially related to climate as an x-risk, but they’re well-received anyway. I do not think this specific concern is significant enough to hinder outreach.
Re: Community. The short answer is that I agree. However, I myself made the jump, and I think that either having a close friend who is passionate about the issue, or engaging with active communities/programmes focused on it, isn’t a super difficult bar to clear.
Re: your plan. That’s basically the exact same plan I have. The difference is that I was going to focus on the anti-AI advocates who have been gaining traction since Stable Diffusion/ChatGPT. But it would apply to both.
Re: Demonstrations. I half-agree. In activism there’s an informal concept of escalation: you start with very friendly, low-stakes outreach (education, social events, outreach to officials). However, about 90% of people who default to this are biased against escalating to demonstrations even when it makes sense. For example, Greta Thunberg’s actions would have seemed incredibly counterintuitive to most climate advocates in 2018, and a lot of people also just value activism/demonstrations at zero or below because they don’t see the social value. I hear this often enough that, in my mind, the effectiveness of demonstrations is almost always undervalued.
Overall, from my experience in climate advocacy, I think people underrate how reasonable others are, especially other highly engaged activists. I expect EAs will be surprised at how receptive climate activists are. Climate activists care a lot about engaging with important ideas and mobilising to do good, and I find they respond (relatively) positively to EA/longtermist ideas. In fact, I know a lot of EAs who used to work in the climate space and entered through other cause areas like animal rights, alt proteins or global poverty alleviation. Like you mentioned, there are also concepts of regulation, social equity and skepticism of large corporations that could be leveraged to find common ground.
Anyway, would love to chat with you and anyone else who finds this idea compelling!
Thank you for your supportive comment! Now I see it’s probably not such a bad idea. I’d sure like to chat with you too! Feel free to message me. I am very curious about your findings.
The Existential Risk Observatory aims to inform the public about existential risks and recently published this, so maybe consider getting in touch with them.
Thanks for your kind comments, I was afraid that my idea was complete nonsense 😅