Ben, thanks for your comments. It's really great to learn more about this topic.
1. Agree, and optimisation in some soft-EA fields could be beneficial.
2. Agree: all other things equal, it would be better to work directly on an x-risk. But it would benefit EA, for the reasons given later, to acknowledge climate change as a potential stressor on x-risk. The assumption I'm using is that climate change might be more tractable for a bigger pool of people. If someone is concerned about x-risk but is an expert on renewable energy or on the breakthrough of some new technology, they could understand their work as reducing the overall portfolio of x-risk. And if direct x-risks are heavily oversubscribed, or rest on only a small number of agents (e.g. nuclear), then perhaps some people have more leverage on climate change.
3. Agree that we should be focusing on things which affect the overall trajectory of civilisation. But is climate change really an intractable problem? If so, why do so many smart people at universities, the UN, and the IPCC have reducing emissions as a goal? Perhaps it's unrealistic to assume we'll get all the way to net zero, but isn't it still a worthwhile goal to push to lower the rate of warming and give us more time to adapt?
I don’t profess to have the answer, but I’d be interested in the debate. I worry that this discussion doesn’t have enough input from real experts in this space.
If climate change is intractable, then what’s the next step? Should we be looking at geoengineering, adaptation, resilience? Assuming that climate change is intractable, then here are some other rough ideas of things that could help global welfare:
Early warning systems for floods in areas with anticipated rising sea levels
Research into how societies should manage heat stress
Research and development of more resilient infrastructure, e.g. energy, food, and water. Even if, in theory, it's just a question of pricing water etc. accurately, I'm not confident that in practice we're doing that very well globally at the moment
4, 5: Interesting to read. I don't profess to be an expert, so I'd appreciate learning from other perspectives.
6, 7, 8: I wonder whether some modelling is overconfident about how resilient societies will be to climate change, since we're densely networked and there might be lots of unanticipated secondary effects, such as mosquitoes affecting another billion people. The latest UK adaptation report acknowledges biodiversity as one of several areas urgently needing further research.
TL;DR: maybe from an impact perspective your point makes sense, but I just find eating animals gross. (Also, inb4 "here comes a vegan".)
My perspective, as a vegan who has learned about animal suffering, is that the consumption of animal products, especially things like chicken breast, or other foods where you can see the bones and mechanics of it being from an animal, makes the food very unappetising. Of course, the biggest impacts on animal welfare come from institutional change, wild animal suffering, and the long-run future of life, but that doesn't stop eating a piece of meat from feeling pretty unpleasant. Given that pigs are of comparable intelligence to dogs, a good comparison might be to think about why people would feel icky about eating labrador burgers.
I don't think it's a fair comparison with EAs taking holidays, or even with how they maximise their altruism. It's true that those actions might affect suffering/welfare more profoundly than their dietary choices, but they're not literally ingesting the flesh of a dead animal, which I think hits the "yuck" reflex pretty hard.
Also, to clarify (even if what you've written is a joke), you've put together a pretty wobbly argument, with an assumption in virtually every point that I don't think is easy to substantiate.
In terms of satire, I'm not sure that the choice not to eat animal products is the funniest topic to satirise. An analogy might be people who don't want to go to animal cagefighting matches because they think it's cruel, even though in some weird scenario they could run a cagefight and earn money to allocate to effective charities in a way that outweighs the cagefight's suffering. But that doesn't make someone finding the cagefighting unpleasant, or their choices around it, a funny topic to satirise.
Thanks for your comments.
Do you also see it as something which should redirect resources currently being spent on long-termist causes?
No, I think that diverting funding from AI alignment to climate change would be a mistake. But optimising money currently spent on climate change could be useful.
This is a small nitpick, but I don’t think I’ve ever seen the claim substantiated that EA’s focus has been unduly influenced by “the short-term world of hedge funds”, even though people make it all the time. Yes, GiveWell was founded by hedge-fund veterans, but the tools they borrowed from Bridgewater were (as far as I know) related to EV calculations, not “having a short time horizon”. EA has, almost since the beginning, had a stronger focus on the long-term future than nearly any other social movement.
Yes, it felt a little harsh of me to have written that. I agree, it's a bit of a strawman argument. I think what I was getting at there is perhaps better expressed in the quote from Christine Peterson.
Yes, the urgency point could indeed fall within the importance lens as you suggest. My concern was that some crude measures of importance didn’t consider this interactive effect in a dynamic world.
In Owen C-B’s ‘Prospecting for Gold’ talk, he briefly talks about urgency as part of tractability (something tractable now could be less tractable in the future).
How would effective altruism be different if we’re living in a simulation?
You might find this book chapter interesting - ‘Twenty-Seven Thoughts About Multiple Selves, Sustainable Consumption, and Human Evolution’ by Geoffrey Miller.
Hey there, interesting article! In this talk from the most recent EA Global, Niel Bowerman (climate physics PhD and now AI specialist at 80,000 Hours) gives some thoughts on the relationship between climate change and existential risk. Essentially, I think there's some evidence bearing on point 2 on your list.
In his talk, Niel argues that climate change could, under some scenarios, cause human extinction in itself. These scenarios are quite unlikely, but have non-zero probabilities. And given that emissions are likely to increase well beyond 2100, we should beware the "2100 fallacy" of cutting impact analyses short at an arbitrary point in time.
The larger contributions, very roughly, probably come from climate change contributing to social collapse and conflict, which themselves lead to existential risks; Toby Ord has called this an "existential risk factor". I think the question isn't "Is climate change an existential risk?" but "Does climate change contribute to existential risk?", in which case the answer seems like it might be yes. Or perhaps "Is climate change important in the long term?", in which case, if we're thinking across multiple centuries and looking at >6C in 2300 (to pick an example), then even with lots of technological development I think the answer is yes.
All of this being said, I still think it's fair to argue that AI, bio, and nuclear risks are more neglected and tractable relative to climate change.
What do you think of Niel’s talk and this framing?
In 2019, I planned to donate 5% of my income. I used payroll donations in the following proportions of this 5% for three months: EA Funds Animal Welfare 5%, LTFF 35%, EA Meta 30%, ALLFED 25%, and due to reading this, CFRN 5%. Then I became more concerned about GCRs, and switched to 50% to GCRI and ALLFED, again through EA Funds.
I used my old company’s matching scheme to provide £500 (plus GiftAid) through EA funds to ALLFED, which was free of charge for me. I donated £100 to Climate Outreach when they had a week of matching. I’ve also previously donated £20/month to the Vegan Society, because of their public campaigns to increase the availability of plant-based food, but I stopped donating there so I could invest more in GCR reduction.
In the last few months of the year, I watched Phil's talk about optimal philanthropy and decided that (a) I was in an optimal stopping problem where I hadn't yet explored enough options, and (b) there may well be higher marginal benefits to future spending on x-risks. Since then, I've maintained a spreadsheet of my income (of which I've spent about 35%) and have invested the rest using this advice.
I tentatively plan to donate to long-term causes once I've done more research on the most tax-efficient way to invest and donate, though potentially not any time soon. For 2020, my only donations so far have been to CATF and CFRN, because of this talk on climate and x-risk, which I'm planning to write up in a forum post soon.
I think that sounds like a great idea. You could put forward a proposal on the EA Forum, with a form for people to express interest, and share it in other places where EA survey respondents expressed an interest. If the EA survey data is accurate, I'd expect a decent level of interest to get it running.
Interesting idea! It might be nice to embed the image, or maybe multiple images. If you don't know how, you can do it by uploading the image to imgur, writing a placeholder word like "photo", selecting it, and then choosing the image icon. You can then resize the image by dragging it.
No worries—edit made.
How do you see climate change affecting the work of AMF? Do changes to water and temperatures mean that the strategy of bednets is still likely to produce similar results in the future as it has in the past?
Thanks, updated.
Yes, the way that worked in consulting for me was that the referral bonus was (very approximately) something like 10% of the hire’s salary. So if someone very senior got hired, you could maybe double your annual paycheck. Not sure that would be appropriate for FHI though...
Most animal welfare initiatives seem to focus on farmed animals. Farmers are experiencing weather extremes, less-predictable seasons, wildfires, and flooding from climate change, all of which are likely to affect farmed animal welfare. How, if at all, does this influence the strategy of the animal welfare movement?
In a blog post from 2019, Kimberly Huynh from the GiveWell team mentioned that they were intending to do further research on climate change mitigation. At present, it seems that only Founders Pledge is doing this research. Is climate change something GiveWell is looking into more generally?
Hi Peter,
I’d like to make the eligibility criteria clear to any prospective applicants:
“The Paycheck Protection Program is a loan designed to provide a direct incentive for small businesses to keep their workers on the payroll.” (link)
The boards and directors of the business have to sign in good faith that “Current economic uncertainty makes this loan request necessary to support the ongoing operations of the Applicant” (link)
Providing misleading or incomplete information is a federal crime
This is an emergency support loan exclusively for businesses to retain workers they'd otherwise be forced to make redundant. My interpretation of your summary is that you could make this point more prominent; at the moment, it's only included in the required documents section.
I think the wording 'make sure to mention that uncertainty of current economic conditions makes necessary the loan request' could be misinterpreted as leading people to exaggerate this factor, though I appreciate this may not be your intention.
I think it would be safer to say that ‘this loan is exclusively available to businesses which are struggling to maintain their staff on the payroll and meet bill payments, and if this condition applies to your organisation, then please report this accurately in the documents you provide’.
Thanks, this is useful. You mentioned above that you’re planning to list more roles looking at biosecurity and climate change. What are 80K’s current thoughts and potential plans, if any, in relation to climate change?
Thanks for the reply Ben. It’s great to hear that you’re looking at producing more content and doing more engagement in this area!
Looking at the old cause selection list from 2017-18 (link), I notice that climate change was ranked #9, below factory farming and improving global health. In your more recent cause prioritisation from 2019 (link), it seems that it's now somewhere in the top seven. Can I ask what's changed to cause 80K to update its cause prioritisation?
Thanks all for your comments. A few friends have emailed me and made a couple of points about this post.
1. On the first-order effects of warming, the Stern Review figures are now 10 years out of date, and the IPCC SR1.5C expects worse impacts to welfare than previously stated, under all trajectories. See Chapter 5 of the report, and Byers et al. 2018.
2. A good source on the impact on GDP and societies through sea level rise is Pretis et al. (2018, Philosophical Transactions of the Royal Society)
3. The goal for climate change mitigation should be getting to net zero emissions as fast as possible, as anything short of that still causes warming; this goal is absent from much EA writing and from the 80,000 Hours write-up.
4. Absent from these discussions are climate economists, who would be able to help us grapple with this more concretely. Some suggested economists to research (and for the 80,000 Hours Podcast) are Adair Turner, Simon Dietz, Cameron Hepburn, and Joeri Rogelj.