A 2017 post from EA at Harvard recommends the following:
Working with local partners to advocate against coal power in China, India and Southeast Asia
Growing capacity and coordination at state and local levels in the U.S.
Contributing to one or more climate philanthropy bodies that strategically target climate finance interventions
No worries—edit made.
Interesting idea! It might be nice to embed the image, or maybe multiple images. If you're not sure how, you can upload the image to Imgur, type a word like 'photo', select it, then choose the image icon. You can then resize the image by dragging it.
I think that sounds like a great idea. You could put forward a proposal on the EA Forum, with a form for people to express interest, and share it in other places where EA survey respondents expressed an interest. If the EA survey data is accurate, I'd expect enough interest to get it running.
In 2019, I planned to donate 5% of my income. For three months I gave via payroll donations, splitting that 5% as follows: EA Funds Animal Welfare 5%, LTFF 35%, EA Meta 30%, ALLFED 25%, and, after reading this, CFRN 5%. Then I became more concerned about GCRs and switched to a 50/50 split between GCRI and ALLFED, again through EA Funds.
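For concreteness, the split above is expressed as fractions of the 5% pledge, not of total income. A minimal sketch of how the amounts work out (the £30,000 income figure is purely hypothetical, not from the original):

```python
# Hypothetical illustration of splitting a 5% giving pledge across funds.
# The income figure below is an example, not an actual amount.
income = 30_000            # hypothetical annual income (GBP)
pledge = 0.05 * income     # the 5% giving pledge

split = {                  # proportions of the pledge, summing to 1.0
    "EA Funds Animal Welfare": 0.05,
    "LTFF": 0.35,
    "EA Meta": 0.30,
    "ALLFED": 0.25,
    "CFRN": 0.05,
}
assert abs(sum(split.values()) - 1.0) < 1e-9

donations = {fund: pledge * share for fund, share in split.items()}
for fund, amount in donations.items():
    print(f"{fund}: £{amount:.2f}")
```

So, for example, the LTFF's 35% is 35% of the 5% pledge, i.e. 1.75% of income.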
I used my old company's matching scheme to provide £500 (plus Gift Aid) through EA Funds to ALLFED, at no cost to me. I donated £100 to Climate Outreach during a week of donation matching. I've also previously donated £20/month to the Vegan Society, because of their public campaigns to increase the availability of plant-based food, but I stopped donating there so I could invest more in GCR reduction.
In the last few months of the year, I watched Phil's talk about optimal philanthropy and decided that (a) I was in an optimal stopping problem and hadn't yet explored enough options, and (b) there may well be higher marginal benefits to future spending on x-risks. Since then, I've maintained a spreadsheet of my income (of which I've spent about 35%), and have invested the rest following this advice.
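The optimal-stopping framing can be made concrete with the classic secretary-problem heuristic: observe roughly the first 1/e (~37%) of options without committing, then take the first option that beats everything seen so far. A rough simulation of that rule (all numbers illustrative, not from the original talk):

```python
import math
import random

def simulate_secretary(n_options: int = 20, trials: int = 10_000, seed: int = 0) -> float:
    """Estimate how often the 1/e 'look-then-leap' rule picks the best option."""
    rng = random.Random(seed)
    cutoff = round(n_options / math.e)   # explore this many options without committing
    wins = 0
    for _ in range(trials):
        values = [rng.random() for _ in range(n_options)]
        best_seen = max(values[:cutoff]) if cutoff else float("-inf")
        # take the first later option that beats the exploration phase,
        # falling back to the last option if none does
        choice = next((v for v in values[cutoff:] if v > best_seen), values[-1])
        if choice == max(values):
            wins += 1
    return wins / trials

rate = simulate_secretary()
print(f"Picked the best option in about {rate:.0%} of trials")
```

The success rate hovers near the theoretical ~37%, which is the intuition behind "I hadn't explored enough options yet": stopping before the exploration phase is over means committing on too little information.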
I tentatively plan to donate to long-term causes, but potentially not any time soon, once I’ve done more research on the most tax-efficient way to invest and donate. For 2020, my only outgoing donations so far have been to CATF and CFRN because of this talk on climate and x-risk, which I’m planning to write up in a forum post soon.
Hey there, interesting article! In this talk from the most recent EA Global, Niel Bowerman (climate physics PhD and now AI specialist at 80,000 Hours) gives some thoughts on the relationship between climate change and existential risk. Essentially, I think his talk provides some evidence on point 2 of your list.
In his talk, Niel argues that under some scenarios climate change could cause human extinction in itself. These scenarios are quite unlikely, but have non-zero probabilities. And since emissions are likely to continue well beyond 2100, beware the '2100 fallacy' of cutting impact analyses short at an arbitrary point in time.
The larger contributions, very roughly, probably come from climate change contributing to social collapse and conflict, which themselves lead to existential risks; Toby Ord has called this an 'existential risk factor'. I think the question isn't "Is climate change an existential risk?" but "Does climate change contribute to existential risk?", in which case the answer seems like it might be yes. Or perhaps "Is climate change important in the long term?", in which case, if we're thinking across multiple centuries and looking at, say, >6C in 2300, then even with lots of technological development I think the answer is yes.
All of this being said, I still think it's fair to argue that AI, bio, and nuclear risks are more neglected and tractable relative to climate change.
What do you think of Niel’s talk and this framing?
You might find this book chapter interesting - ‘Twenty-Seven Thoughts About Multiple Selves, Sustainable Consumption, and Human Evolution’ by Geoffrey Miller.
How would effective altruism be different if we’re living in a simulation?
I think gifted teenagers should be aware of the subjectivity and complexity of history and narratives. So my choices are geared around challenging existing narratives. I’ve also made an effort to choose some female authors.
1. Howard Zinn—A People’s History of the United States
An absolute blast of revisionist history, critiquing the American Dream from a range of angles. It is contentious, but that’s the point—to provoke a debate.
What struck me as I began to study history was how nationalist fervor—inculcated from childhood on by pledges of allegiance, national anthems, flags waving and rhetoric blowing—permeated the educational systems of all countries, including our own. I wonder now how the foreign policies of the United States would look if we wiped out the national boundaries of the world, at least in our minds, and thought of all children everywhere as our own. Then we could never drop an atomic bomb on Hiroshima, or napalm on Vietnam, or wage war anywhere, because wars, especially in our time, are always wars against children, indeed our children. Howard Zinn, A People’s History
2. The Golden Notebook—Doris Lessing
I’ve heard several people in EA dismissing fiction. This is ridiculous. Fiction has a lot to teach us about our own thought processes, the lives of others, and the cultures we live in. TGN is feminist and anti-war, and especially considering Lessing’s non-standard educational background, the prose is utterly brilliant.
Ideally, what should be said to every child, repeatedly, throughout his or her school life is something like this: ’You are in the process of being indoctrinated. We have not yet evolved a system of education that is not a system of indoctrination. We are sorry, but it is the best we can do. What you are being taught here is an amalgam of current prejudice and the choices of this particular culture. The slightest look at history will show how impermanent these must be. You are being taught by people who have been able to accommodate themselves to a regime of thought laid down by their predecessors. It is a self-perpetuating system. Those of you who are more robust and individual than others will be encouraged to leave and find ways of educating yourself — educating your own judgements. Those that stay must remember, always, and all the time, that they are being moulded and patterned to fit into the narrow and particular needs of this particular society. Doris Lessing, The Golden Notebook
3. To Kill a Mockingbird—Harper Lee
Hopefully this is on every school reading list on Earth, but just in case it isn't, I'll back it here. I cry every time I read Atticus' speeches.
I wanted you to see what real courage is, instead of getting the idea that courage is a man with a gun in his hand. It’s when you know you’re licked before you begin, but you begin anyway and see it through no matter what. Harper Lee, To Kill a Mockingbird
Yes, lots. A good book is Global Catastrophic Risks, edited by Bostrom and Ćirković; the first chapter is available from Bostrom's website as a PDF. The website 80000hours.org has a problem profile page. Toby Ord is also working on a new book; he mentions it in his 80,000 Hours podcast interview, and recommends the book 'The End of the World' by John Leslie. Phil Torres also publishes on this and has just started at CSER, which also has a big research agenda.
Yes, the urgency point could indeed fall within the importance lens as you suggest. My concern was that some crude measures of importance didn’t consider this interactive effect in a dynamic world.
In Owen C-B’s ‘Prospecting for Gold’ talk, he briefly talks about urgency as part of tractability (something tractable now could be less tractable in the future).
Thanks for your comments.
Do you also see it as something which should redirect resources currently being spent on long-termist causes?
No, I think that funding averted from AI alignment to climate change would be a mistake. But optimising money currently spent on climate change could be useful.
This is a small nitpick, but I don’t think I’ve ever seen the claim substantiated that EA’s focus has been unduly influenced by “the short-term world of hedge funds”, even though people make it all the time. Yes, GiveWell was founded by hedge-fund veterans, but the tools they borrowed from Bridgewater were (as far as I know) related to EV calculations, not “having a short time horizon”. EA has, almost since the beginning, had a stronger focus on the long-term future than nearly any other social movement.
Yes, it felt a little harsh of me to have written that. I agree that it's a bit of a strawman argument. I think what I was getting at is perhaps better expressed in the quote from Christine Peterson.
TL;DR—maybe from an impact perspective your point makes sense, but I just find eating animals gross. (Also, inb4 "here comes a vegan")
My perspective, as a vegan who has learned about animal suffering, is that the consumption of animal products, especially things like chicken breast or other foods where you can see the bones and mechanics of its being from an animal, makes the food very unappetising. Of course, the biggest impacts on animal welfare come through institutional change, wild animal suffering, and the long-run future of life, but that doesn't stop eating a piece of meat from just feeling pretty unpleasant. Given that pigs are of comparable intelligence to dogs, a good comparison might be to ask why people would feel icky about eating labrador burgers.
I don't think it's a fair comparison with EAs taking holidays, or even with how they maximise their altruism. It's true that those actions might affect suffering/welfare more profoundly than dietary choices do, but they don't involve literally ingesting the flesh of a dead animal, which I think hits the 'yuck' reflex pretty hard.
Also, to clarify (even if what you've written is a joke), you've put together a pretty wobbly argument, with an assumption in virtually every point that I don't think is easy to substantiate.
In terms of satire, I'm not sure that satirising the choice not to eat animal products is the funniest topic. Another example might be people who don't want to go to animal cage-fighting matches because they think it's cruel, although maybe in some weird scenario they could run a cage fight and earn money to allocate to effective charities in a way that outweighs the suffering it causes. But that doesn't make someone finding the cage fighting unpleasant, or their choices around it, a funny topic to satirise.
Thanks for your comments—really great to learn more about this topic.
1. Agree, and optimisation in some soft-EA fields could be beneficial.
2. Agree—all other things equal, it would be better to work directly on an x-risk. But it would benefit EA, for the reasons given later, to acknowledge climate change as a potential stressor on x-risk. My assumption is that climate change might be more tractable for a bigger pool of people. If someone is concerned about x-risk but is a renewable energy expert working on the breakthrough of some new technology, then they could understand their work as reducing the overall portfolio of x-risk. And if direct x-risks are heavily oversubscribed, or depend on only a small number of agents (e.g. nuclear), then perhaps some people have more leverage on climate change.
3. Agree that we should focus on things which affect the overall trajectory of civilisation. But is climate change really an intractable problem? If so, why do so many smart people at all these universities, and at the UN and IPCC, have reducing emissions as a goal? Maybe it's intractable to assume we'll get to net zero, but isn't it a worthwhile goal to lower the rate of warming and give us more time to adapt?
I don’t profess to have the answer, but I’d be interested in the debate. I worry that this discussion doesn’t have enough input from real experts in this space.
If climate change is intractable, then what's the next step? Should we be looking at geoengineering, adaptation, or resilience? Assuming climate change is intractable, here are some rough ideas of other things that could help global welfare:
Early warning systems for floods in areas with anticipated rising sea levels
Research into how societies should manage heat stress
Research and development of more resilient infrastructure, e.g. energy, food, and water—even if accurately pricing water etc. is theoretically straightforward, I'm not confident that in practice we're doing it very well globally at the moment
4, 5. Interesting to read. I don't profess to be an expert, so I would appreciate learning from other perspectives.
6, 7, 8. I wonder whether some modelling is overconfident about how resilient societies will be to climate change, since we're densely networked and there might be lots of unanticipated secondary effects, such as mosquitoes affecting another billion people. The latest UK adaptation report acknowledges biodiversity as one of several areas urgently needing further research.
Thanks all for your comments. A few friends have emailed me and made a couple of points about this post.
1. On the first-order effects of warming, the Stern Review figures are now 10 years out of date, and the IPCC SR1.5 expects worse welfare impacts than previously stated, under all trajectories. See Chapter 5 of the report, and Byers et al. 2018.
2. A good source on the impact of sea level rise on GDP and societies is Pretis et al. (2018, Philosophical Transactions of the Royal Society).
3. The goal for climate change mitigation should be reaching net zero emissions as fast as possible, since anything short of that still causes warming; this goal is absent from many EA discussions and from the 80,000 Hours write-up.
4. Absent from these discussions are climate economists, who could help us grapple with this more concretely. Some suggested economists to research (and for the 80,000 Hours Podcast) are Adair Turner, Simon Dietz, Cameron Hepburn, and Joeri Rogelj.