Climate Change & Longtermism: new book-length report
Since 2021, as part of the research for What We Owe the Future, I have been working on a report on climate change from a longtermist perspective. The report aims to provide the most complete treatment of that question yet produced. The executive summary is below, and the full report is available here, on the What We Owe the Future supplementary materials webpage. I am grateful to the expert reviewers of the report for their comments. Views and mistakes are my own.
In this report, I will evaluate the scale of climate change from a longtermist point of view. Longtermism is the idea that influencing the long-term future, thousands of years into the future and beyond, is a key moral priority of our time.
In economics, longtermism is embodied by the idea that we should have a zero rate of ‘pure time preference’: we should not discount the welfare of future people merely because it is in the future. Economists who embrace a zero rate of pure time preference will tend to favour more aggressive climate policy than those who discount future benefits.
Climate change is a proof of concept of longtermism. Every time we drive, fly, or flick a light switch, each of us causes CO2 to be released into the atmosphere. This changes the amount of CO2 that is in the atmosphere for a very long time: unless we suck the CO2 out of the atmosphere ourselves, concentrations only fall back to natural levels after hundreds of thousands of years. The chart below shows long-term CO2 concentrations after different amounts of cumulative carbon emissions.
Some of the ecological effects of climate change get worse over time. The clearest example of this is sea level rise. On current policy, the most likely sea level rise this century is 75cm. However, over 10,000 years, sea levels will rise by 10 metres. Over the long term, the world will look very different.
From a longtermist point of view, it is especially important to avoid outcomes that could have persistent and significant effects. These include events like human extinction, societal collapse, a permanent negative change in human values, or prolonged economic stagnation. If we go extinct, then that would be the end of the human story, and there would be no future generations at all. If civilisation collapses permanently, then future generations will be left much worse off than they could have been, living lives full of suffering rather than ones of flourishing.
The anatomy of climate risk
The overall size of climate risk depends on the following factors:
Greenhouse gas emissions
The climate change we get from different levels of emissions
The impacts of different levels of climate change
There is uncertainty about all three factors. The main findings of this report are as follows.
Emissions are likely to be lower than once thought
Due to recent progress on clean technology and climate policy, we look likely to avoid the worst-case emissions scenario, known in the literature as ‘RCP8.5’. The most likely scenario on current policy is now the medium-low emissions pathway known as ‘RCP4.5’. Moreover, climate policy is likely to strengthen in the future. For instance, as I was writing this report, the US Senate passed the Inflation Reduction Act, the most significant piece of climate legislation in American history.
Climate change is a great illustration of how society can make progress on a problem if enough people are motivated to solve it. This does not mean that climate change is solved, but there is significant momentum, and we are at least now moving in the right direction.
The amount of carbon we could burn in a worst-case scenario is also much lower than once thought. Some of the literature assumes that there are 5 or even 10 trillion tonnes of carbon remaining in fossil fuels, mostly in the form of coal. However, these estimates fail to recognise that not all fossil fuel resources are recoverable. Estimates of recoverable fossil fuels range from 1 to 3 trillion tonnes of carbon.
It is difficult to come up with plausible scenarios on which we burn all of the recoverable fossil fuels. Doing so would require both (1) significant improvements in advanced coal extraction technology, which is not part of the energy conversation today, and (2) a dramatic slowdown in progress in low carbon technologies that are already getting substantial policy support.
Warming is likely to be lower than once thought
Warming will likely be lower than once feared, in part because of lower emissions and in part because the scientific community has reduced uncertainty about climate sensitivity. Where once current policy seemed likely to imply 4ºC of warming above pre-industrial levels, now the most likely level of warming is around 2.7ºC, and the chance of 4ºC is around 5%. Moreover, where once there seemed to be a >10% chance of 6ºC on current policy, the risk now seems to be well below 1%.
In a worst-case scenario in which we burn all of the recoverable fossil fuels, the most likely level of warming is 7ºC, and there is a 1 in 6 chance of more than 9.5ºC.
Climate change will disproportionately harm the worst-off
The climate impacts literature suggests that climate change will impose disproportionate costs on countries at low latitude, which are disproportionately low- and middle-income and have done the least to contribute to climate change. People in Asia will have to deal with increasing flooding due to rising sea levels. Climate change will damage agricultural output, and cause droughts in countries reliant on rainfed agriculture. People in the tropics will face rising levels of heat stress. Fossil fuels also kill millions of people from air pollution in both poor and rich countries.
Many low- and middle-income countries have essentially never experienced sustained improvements in living standards, and a significant fraction may be left worse-off than today due to climate change. This undermines one common argument for discounting the future costs of climate change—that future generations will be richer and so better able to adapt to the effects of climate change.
We have a clear moral responsibility not to impose this harm, to reduce emissions, and to encourage economic development in poorer countries.
Average living standards will probably continue to rise
Climate-economy models confirm that the costs of climate change will fall disproportionately on poorer people, but almost all models also suggest that global average living standards in the future will be higher than today, on plausible levels of warming. Income per person looks set to increase by several hundred percent by the end of the century, notwithstanding the effects of climate change.
‘Bottom-up’ climate-economy models included in the IPCC’s Sixth Assessment Report that add up the effects of climate impacts in different sectors and plug them into modern economic models suggest that warming of 4ºC would do damage equivalent to reducing global GDP by around 5%. One recent model, Takakura et al (2019), includes the following impacts:
Heat-related excess mortality
Hydroelectric generation capacity
Thermal power generation capacity
For instance, in agriculture, the message from the climate impacts literature is that although climate change will damage food production, average food consumption per person will be higher than today, even for 4ºC of warming, due to progress in agricultural productivity and technology. This is illustrated on the chart below from van Dijk et al (2021), which shows per capita food consumption on different socioeconomic pathways.
I have previously been critical of climate-economy models, but now believe they are more reliable than they once were. Until recently, a key determinant of aggregate impact assessments was how to model the effects of >4.4ºC of warming, because the chance of that level of warming was so high. The estimates models arrived at were unmotivated and arbitrary, in part because the literature on the impacts of >4.4ºC was sparse. However, warming of >4.4ºC now seems increasingly unlikely (<1% given likely trends in policy), and there is a rich and voluminous literature on the impacts of warming up to 4.4ºC. This makes recent bottom-up models more reliable.
However, even the best bottom-up climate-economy models underestimate the costs of climate change because they do not account for some important direct costs:
They do not include tipping points
They do not explicitly model the potential effects of climate change on economic growth and technological progress
It is unclear how much these factors would increase the overall direct costs of climate change; that is an important area of future research for climate economics. However, for levels of warming that now seem plausible, these effects seem unlikely to be large enough to outweigh countervailing improvements in average living standards.
Bottom-up climate-economy models also do not account for indirect effects, such as conflict, which I discuss below.
‘Top-down’ climate-economy models try to directly measure the effects of climate change on aggregate economic output, and some of these find much higher impacts from climate change, on the order of a 25% reduction in GDP for 4ºC warming. However, these results are highly model-dependent, rely on questionable econometric assumptions, and exclude several important climate impacts. In my view, the best bottom-up studies are a more reliable guide, notwithstanding their flaws.
Although average living standards are likely to continue to rise, we also need to consider the possibility of societal collapse for other reasons, such as a pandemic or nuclear war. If there were to be a major global catastrophe, then future living standards may not actually be higher than today. Future generations trying to rebuild society would have to do so in a less hospitable climate.
Some tipping points could have very bad effects
In my view, the most concerning tipping points highlighted in the literature are rapid cloud feedbacks, collapse of the Atlantic Meridional Overturning Circulation and collapse of the West Antarctic Ice Sheet.
Some models suggest that if CO2 concentrations pass 1,200ppm (compared to 415ppm today), cloud feedbacks could cause 8ºC of additional warming over the course of years to decades, on top of the 5ºC we would already have experienced. The impacts of this sort of extreme warming have not been studied, but it seems plausible that hundreds of millions of people would die. Moreover, people would be stuck with an extreme greenhouse world for millennia. This would extend the ‘time of perils’: the period in which we have the technology to destroy ourselves, but lack the political institutions necessary to manage that technology. It would also make it much harder to recover from a civilisational collapse caused by something else (such as a pandemic or nuclear war). However, given progress on emissions, it is now difficult to come up with plausible scenarios on which CO2 concentrations rise to 1,200ppm.
Collapse of the Atlantic Meridional Overturning Circulation would cause cooling and drying around the North Atlantic, and more importantly would probably weaken the Indian and West African monsoons, with potentially dire humanitarian implications. At 4ºC of warming, models suggest that the chance of collapse is 1-5%, though they probably understate the risk.
There is deep uncertainty about potential sea level rise once warming passes 3ºC. For higher levels of warming, there is a risk of non-linear tipping points, such as collapse of the West Antarctic Ice Sheet, which could raise sea levels by around 5 metres over 100 years and would probably flood numerous highly populated cities, especially in Asia.
Due to progress on emissions, these tipping points now look less likely than they did ten years ago, but their expected costs (impact weighted by probability) may still be large. Furthermore, our understanding of the climate system is imperfect, and there may be other damaging tipping points that we do not yet know about.
All this being said, contra some prominent research, the evidence from models and the paleoclimate (the deep climate history of the Earth) suggests that it is not the case that, once warming passes 2ºC-4ºC, runaway feedback loops will kick in that make the world uninhabitable.
Direct impacts fall well short of human extinction
Given progress in emissions, the risk of human extinction from the direct effects of climate change now seems extremely small. The most plausible route to human extinction is via runaway feedback loops. However, models and evidence from the paleoclimate suggest that it is impossible to trigger such runaway effects with fossil fuel burning. Models suggest that we could only trigger a runaway greenhouse if CO2 concentrations pass 3,000ppm (at the very least), which is out of reach on revised estimates of recoverable fossil fuels.
Moreover, global average temperatures have been upwards of 17ºC higher several times in the past without triggering runaway feedback loops that killed all life on Earth. Indeed, since the Cretaceous, 145 million years ago, periods of high temperatures and/or rapid warming have not been associated with ecological disaster. However, prior to the Cretaceous, climate change was linked to ecological disaster. In the report, I discuss the theory that this was because of ecological and geographical factors unique to the pre-Cretaceous period.
I construct several models of the direct extinction risk from climate change but struggle to get the risk above 1 in 100,000 over all time.
One argument that climate change could directly cause civilisational collapse is that it could be a contributing factor (along with deforestation, human predation, and pollution) to ecosystem collapse, which could in turn cause the collapse of global agriculture. I argue in the main report that this risk is minimal.
Indirect risks are under-researched but now seem fairly low
Because interstate war has become increasingly rare since the end of World War II, most of the literature on climate change and conflict has focused on the connection between climate and civil conflict: conflicts between a government and its citizens in which more than 25 people are killed.
Scholars in the field agree that, so far, climate-related factors have been a much weaker driver of civil conflict than other factors such as socioeconomic development and state capacity. However, there is strong disagreement in the field about how important climate change will be in the future. It is widely agreed that the risk of climate-induced conflict is greatest in low- and middle-income countries, and that the most important mechanism is damage to agriculture.
The impact of climate change on the risk of interstate, rather than civil, war is potentially much more important but also much less studied. Among interstate conflicts, conflicts between the major powers pose by far the largest risk to humanity. This is because the major powers have far more destructive weaponry and have the capacity to alter the trajectory of humanity in other ways.
The most plausible way that climate change could affect the risk of interstate war is by causing agricultural disruption, which causes civil conflict, which in turn causes interstate conflict. Indeed, there is some evidence that countries embroiled in civil conflict are more likely to engage in military disputes with other countries.
It is difficult to see how climate change could be an important driver of some of the most potentially consequential conflicts this century—between the US and Russia, and the US and China. It is more plausible that climate change could play a larger role in driving conflict between India and Pakistan and also India and China. However, for plausible levels of warming, other drivers of this conflict seem much more important.
It is extremely difficult to provide reliable quantitative estimates of the risk of Great Power War caused by climate change. Nonetheless, I have built a model that attempts to put some numbers on the key considerations. I think this is valuable for several reasons. Firstly, it clarifies the cruxes of disagreements and allows focused discussion on those cruxes. Secondly, it allows us to prioritise different problems. Even if we do not quantify, we will still have implicit judgments about how important different considerations are; models make those judgments precise.
The downside of quantitative models is that they can cause false precision and anchor readers, even if the model is not good and has not been subject to scrutiny. Many of the considerations I have discussed are very difficult to quantify because there is essentially no literature on them.
With those caveats in mind, my best guess estimate is that the indirect risk of existential catastrophe due to climate change is on the order of 1 in 100,000, and I struggle to get the risk above 1 in 1,000. Working directly on US-China, US-Russia, India-China, or India-Pakistan relations seems like a better way to reduce the risk of Great Power War than working on climate change.
My personal thoughts on prioritising climate change relative to other problems
My primary goal in this report is to help people to answer the following question:
If your goal is to make the greatest possible positive impact on the world, what should you do with your time and money right now, given how the rest of society is spending its resources?
Crucially, this question is about what people should do on the margin. It is about what people should do given how society allocates its resources, not about how society as a whole should allocate its resources. Thus, when I say that working on some other problems, such as nuclear war or biosecurity, will have greater impact, this doesn’t mean that society as a whole should spend nothing on climate change and everything on nuclear war and biosecurity. Rather, it is a claim about what we should do with our resources given how other resources are currently spent.
Moreover, the question I am trying to answer in this report is specifically about how to make the greatest possible impact on the world. This is the highest possible bar. In my view, climate change is one of the most important problems in the world, but other problems, including engineered viruses, advanced artificial intelligence and nuclear war, are more pressing on the margin because they are so neglected. One can visualise this in the following way. Green projects are beneficial on the margin, and red projects are harmful on the margin. Deeper green projects are more beneficial whereas deeper red projects are more harmful on the margin.
To emphasise, we should not confuse the claim that other problems are more pressing than climate change with the claim that climate change doesn’t matter at all. I am glad that climate change is a top priority for millions of young people and for many of the world’s smartest scientists, and I would like governments and the private sector to spend more on climate change. I helped to set up the Founders Pledge Climate Change Fund (donate here), which has helped to move millions of dollars to effective climate change charities. The point is that I would like other global catastrophic risks to receive comparable attention, not that I would like climate change to receive less than it does today.
Imagine that only a few hundred people in the world thought that climate change is an important problem (rather than at least tens of millions), that philanthropists worldwide spent a few million dollars a year on climate (rather than $10 billion), that society as a whole spent a million dollars on the problem (rather than $1 trillion), and that the international institutions trying to tackle the problem either don’t exist or have a similar budget to a McDonald’s restaurant. How bad would climate change be? This is how bad things are for the other global catastrophic risks, and then some.
The final important piece of context is as follows: although I am taking a longtermist perspective in this report, my conclusions about the priority of climate change relative to other global catastrophic risks are also true if you think only current generations matter. In my view, the risks from AI, biorisk and nuclear war this century are much higher than commonly recognised.
AI: Forecasters on the community forecasting platform Metaculus think that artificially intelligent systems that are better than humans at all relevant tasks will be created around 2042. The most sophisticated attempt to forecast transformative AI is by Ajeya Cotra, a researcher at the Open Philanthropy Project, and her model now suggests that it is most likely to be developed in 2040. A 2017 survey of hundreds of leading AI researchers found that the median judgments implied around a 4% chance of human extinction caused by AI before the end of the century.
Biorisk: Combined forecasts on Metaculus imply that the chance of synthetic biology killing more than 10% of the world population by 2100 is around 7%. The implied chance of synthetic biology killing more than 95% of the world population before 2100 is around 0.7%.
Nuclear war: Forecasters on the community forecasting platform Metaculus think that there is an 8% chance of thermonuclear war by 2070.
These risks are not speculative possibilities, and the case for working on them is not contingent on ignoring the suffering of the current generation for the sake of a tiny probability of techno-catastrophe. I think it highly likely that my daughter will have to live through nuclear war, pandemics created by engineered viruses, and/or the emergence of transformative AI systems that will radically alter society. It is deeply unfortunate that few people acknowledge these problems, and that many people who are aware of them dismiss them as sci-fi fantasies without attempting to engage with the arguments, or grappling with the fact that many people working in these fields agree that the risks are large.
Although, I contend, my conclusions follow on both neartermist and longtermist perspectives, it is important to reiterate that, in my view, a longtermist ethical point of view is the correct one. I see no compelling arguments for ignoring the welfare of future generations, and an ethical system that does ignore them is obviously difficult to square with concern about climate change.
While many people accept that the direct risks of climate change are lower than these other risks, some argue that the indirect effects of climate change may be large enough to make the total risk of climate change comparable. I do not think this is plausible. As discussed above, my rough models suggest that the total risk of climate change falls well short of the direct risk posed by the other global catastrophic risks. Moreover, the other risks also have indirect effects. As a rule, we should expect greater direct risks to have greater indirect effects. For instance, the indirect effects of trends in biotechnology seem to me much larger than the indirect effects of climate change. If biotechnology does democratise the creation of weapons of mass destruction, the indirect effects for the global economy and geopolitics are hard to fathom but seem enormous.
Overall, because other global catastrophic risks are so much more neglected than climate change, I think they are more pressing to work on, on the margin. Nonetheless, climate change remains one of the most important problems from a longtermist perspective. If progress stalls and emissions are much higher than we expect, then there is a non-negligible chance of highly damaging tipping points. Moreover, climate change is a stressor of political upheaval and conflict, which can in turn increase other global catastrophic risks. Finally, extreme climate change would make recovery from civilisational collapse more difficult.
Here are my high-level thoughts on the comments on this report so far:
This is a detailed report, where a lot of work has been put in, by one of EA’s foremost scholars on the intersection of climate change and other global priorities.
So it’d potentially be quite valuable for people with either substantial domain expertise or solid generalist judgement to weigh in here on object-level issues, critiques, and cruxes, to help collective decision-making.
Unfortunately, the comments here are overly meta. Of the ~60 comments so far on this thread, only 0.5 approach anything like technical criticism, cruxes, or even engagement.
The 0.5 in question is this comment by Karthik.
(EDIT: I think Noah’s comment here qualifies)
Compare, for example, the following comments to one of RP’s cultured meat reports.
After saying that, I will hypocritically continue to follow the streak of being meta while not having read the full report.
I think I’m confused about the quality of the review process so far. Both the number and quality of the reviewers John contacted for this book seemed high. However, I couldn’t figure out what the methodology for seeking reviews was here.
To be clear, I’m aware that this is an isolated demand for rigor. My impression is that very few other EA research organizations have a very formal and legible process for prepublication reviews.
However, I think for a report of this scope, it might be valuable to have a fairly good researcher sit down and review the report very carefully, in a lot of detail.
If this is not already done, for people who are not satisfied with (or are at least queasy about) the report, it might be helpful to commission 1-3 months of research critiquing and red-teaming this report in a lot of detail.
I think John is understandably frustrated by the lack of technical engagement and the overly meta nature (and possibly personal acrimoniousness?) of the comments so far.
I think John has acted badly in accusing A.C.Skraeling of being a sock puppet account.
I think every loosely organized emerging research field has “duds,” or low-quality work that in a poorly filtered ecosystem will suck up a lot of the oxygen.
For both politeness and practical time-constraints reasons, people tend to ignore the bad work and only engage with the good/mediocre work.
Larks is one of the few people who point out this explicitly (for AI alignment). I think his summary of the situation (including why people do this, and downsides) is a worthwhile read, will recommend!
My impression is that academia and other relatively formal systems deal with this via more explicit notions of prestige, conference/paper rejections, etc.
However, this comes with its own dysfunctions and risks ossification.
Taking a step back, I’m a bit confused about why longtermist researchers continue to spend such significant intellectual resources on studying climate in this fashion. My guess is that anyone who was convinced by earlier research by John, 80k, etc., plus the relevant research and communication in other, riskier, domains, is unlikely to change their mind in response to critiques of this deeper report. And anybody who wasn’t convinced by the earlier reports is unlikely to be convinced by this one.
So I don’t really know the value-add of more intellectual attention into this.
I looked into climate change for myself for several weeks in 2019 (back when I was substantially worse at both research and judgement). I think I was fairly satisfied that it should not be one of EA’s top priorities, even ignoring neglectedness. My reasoning then was fairly simple.
My main reason for thinking that climate change was a significant existential risk (or at least a GCR) was that I had the vague impression that this is a position in academia.
My main reasons for not believing that were a) weak inside views about historical resilience to temperature variability and b) that other smart and thoughtful EAs didn’t believe it.
Thus, the appropriate way to evaluate the evidence is to try to look at the climate change literature, in a way that does not reference past EA work (so as to not be clouded as much by biases/information cascades).
I looked at it for a while, and concluded that the literature mostly did not say that climate change was a significant existential risk (of course it was not framed in those terms).
I thought that climate risk experts should be predisposed to overestimate rather than underestimate risks, so their relatively sanguine conclusions carried additional evidential weight.
Given that my reasons for thinking climate change was a significant existential risk weren’t very strong or robust to begin with, I didn’t need that many bits to be convinced that the risks are low.
I think my research quality/rigor at the time was pretty meh. Still I don’t think it should swing the overall conclusion.
I’m more worried about motivated reasoning/selection effects. Motivated reasoning because I felt like part of the reason for doing that research was convincing myself and others, rather than because I fully dispassionately wanted to know the truth. Selection effects because I went into it using an EA methodology and ways of thinking about the world, and of course I’m selected to be one of the people who think in that way.
Since then, I had 3 updates re: how high climate risk should intuitively “feel”, in my head, in different directions:
Jan-March 2020 Covid was a large update for me against a) trusting the object-level of a lot of expert modeling and b) trusting that experts will uniformly overestimate rather than underestimate risk (as opposed to “don’t panic” justifications and conservatism bias). This caused me to worry more about climate stuff.
John/Johannes looked into it and updated towards believing in lower overall climate risk, with reasonable-sounding object-level reasons. This caused me to worry less about climate stuff.
I did not look into or investigate their reasoning, and generally have not prioritized looking into climate much since 2019.
Compared to myself in 2019, I placed relatively more trust in the judgement of EAs over that of external people. This caused me to worry less about climate stuff.
I think the first update is the largest.
My understanding is that a lot of the purported value of engaging on the object-level details of climate, and of WWOTF’s aims in general, is to draw in the climate change people as allies.
I’m a bit confused about this reasoning. I think:
the arguments for working on AI risk/biorisk over climate risks seems fairly simple to me, and resilient to a practical range of empirical disagreements within climate.
to the extent the arguments are bad or poorly communicated, I strongly suspect the problems come almost entirely from the AI risk/biorisk arguments being insufficiently grounded, rather than from the climate arguments being insufficiently sophisticated.
from a persuasion perspective, I think this is maybe a bad move anyway. Maybe better to be “yes, and” than to explicitly and immediately tell people that the thing they’ve devoted their lives to is >100x less important.
Given that it might not be realistic (and may not be very valuable) to find a senior-ish EA researcher to critique and red-team John’s report, I’d be excited to have several junior EAs try this.
Roughly, a summer-long project for someone of the calibre of a CERI/SERI fellow might be to go line by line through the report (or a few chapters) to find the relevant potential mistakes/disagreements/cruxes (from an LT perspective).
This helps train critical thinking abilities and thoughtfulness.
To be clear, this is premised on domain-general critical thinking abilities and thoughtfulness being among the most valuable things for junior EAs to train.
I’m genuinely confused about how much we want less thoughtful people as allies. I think an increasingly common position in longtermist EA nowadays is that the things that matter are AI risk and biorisk (sometimes just AI risk), and that what we most/only need are people who are either extremely technically competent (for solving the technical problems) or very good at politics (for policy/persuasion).
Whereas I feel like I sort of represent the older guard of people who still think that thoughtfulness is a huge asset.
But I acknowledge that this is something I have a (large?) bias to believing is true (because I’m relatively worse at being very technical or social, compared to being thoughtful).
I think this is mainly because of the length of the report, which makes it hard to make meaningful critiques without investing a bunch of time.
Yes, and I note that as/after I wrote my comment, there are more thoughtful object-level comments. So perhaps I commented too early and should’ve just waited for people to have time to read the report first and then provide object-level comments!
Not sure why this was downvoted. I really appreciate comments where people publicly acknowledge they may have made an error and update their views.
I think this is a good place to have discussions about claims in specific sections (rather than the whole report) if people would like
I don’t have time in the next several days to give your write-up the attention it deserves, but I hope to study it as a learning opportunity and to expand my grasp of general arguments around what I call steady-state climate change, that is, climate change without much contribution from tipping points this century and without strong impacts at even higher temperatures (eg., 3-4C). I appreciate the structure of your report, by the way, it lets a reader quickly drill down to sections of interest. It is clearly written.
At the moment, I am considering your analysis of permafrost and methane contributions to GAST changes. I have a larger number for total carbon in permafrost than you, 1.5 Tt of carbon, but now have to go through references to reconcile that number with yours. The USGS analysis you mention deserves a read through the articles in the reference you gave, and I am attempting that now.
There are several parameters involved (only some independent), to do with:
source type (anaerobic decomposition, free gas deposit, methane hydrate dissolution),
source depth and layering,
rate of release (obviously dependent on other parameters),
geographic location (Gulf of Mexico versus Arctic ice shelf),
temperature gradient (at a location),
water column height (near-shore vs slope),
in deciding whether the methane (edit: carbon in methane) reaches the atmosphere as methane, as carbon dioxide, or not at all, and over what time period.
The significance of field observations over the last 10 years, and differences between particular regions (eg, Arctic seas), should be taken into account. Before reviewing counterarguments, I tend to take specialist claims about conclusions that factor in these parameters uncritically, but now that you’ve mentioned one parameter, aerobic bacteria acting on methane, as implying conclusions contrary to mine, I should delve deeper into how these parameters interact.
If you want to offer a comment about Greenland’s ice sheet and its potential contribution to sea level rise this century, I am curious to check sources with you and do more reconciling (or at least partitioning) of references. I’ve seen reports that changes to Greenland’s ice sheet are accelerating and lead to estimates of sea level rise higher than, say, 50 cm (more like meters, actually) over the next 80 years, but would like to know more from you.
In general, my observation is that strong drivers of change to specific tipping points haven’t found their way into the climate models used by the IPCC (for example, physical processes driving some Greenland ice melt). They might at some point.
BTW, I did take a read through the comments here, and consider the mentions of analyses of systemic and cascading risks to be useful. I hope you won’t object if I ask a few questions about those risks, just to understand your perspective on those models. However, if you consider those questions to be out of scope or not of interest, let me know, and I’ll hold off.
(Just noting that I’m not ignoring your comments about methane clathrates, but I don’t think you were asking for a response there, but were instead just highlighting some issues for you to look into? Correct me if I’m wrong)
Yes, I note that there is deep uncertainty about sea level rise once warming passes 3ºC, and that sea level rise might be much higher than estimated. I discuss the impacts this might have in the sea level rise section and the economic costs section.
I agree that many specific tipping points haven’t made their way into IPCC models.
In your research report, you wrote:
and you include the footnote:
In the Nature paper you cited for a listing of permafrost carbon, you find the following quote on the same page that lists total carbon in the top 3 meters of permafrost. I list the geographic regions in braces for clarity:
So the total amount of carbon in permafrost is between 1730 and 1980 Pg, or 1.73-1.98 trillion tonnes of carbon, not the 1 trillion tonnes you list. This is typically described as twice the carbon currently in the atmosphere, but how quickly it causes heating at a given rate of thaw depends on whether it is released as methane or carbon dioxide. As you know, methane has roughly 100X the heating potential of carbon dioxide, but that drops off rapidly over a couple of decades, so the rate of release is very important.
If you look elsewhere for amounts, you find the usual figure listed is 1.5 Tt for total carbon in permafrost. I think that represents updates to estimates, but I have not looked into it in detail. A slight rephrase of your sentence “About 1 trillion tonnes of carbon is stored in permafrost.”, either to mention the top 3 meters of soil explicitly for the trillion-tonne figure or to use some figure closer to 1.5 Tt (NOAA’s mid-range for total northern permafrost), would bring the announced total closer to what people typically mean by total carbon in permafrost, namely just the permafrost in the North.
Earlier in the same Nature paper, you read:
In the reference you cite, it’s clear that carbon deeper than 3 m is considered “susceptible to future thaw”, and so is relevant to discussions of permafrost contribution to global warming. In fact, some existing examples of that thaw are mentioned in that 2015 paper.
I think there are numerous jumping-off points for discussion of the effects of permafrost thaw aside from the Box 5.1 sections of the IPCC technical report that you cite.
The parameters deciding the effects of permafrost include:
abrupt vs gradual release of carbon
carbon release as methane (CH4) vs carbon dioxide (CO2)
geological shaping of ice and organic matter within permafrost
release of ancient microbes (bacteria, viruses) and specific pathogens (anthrax, smallpox) in the soil
subsidence rates (causing effects on current or future infrastructure)
biophysical rates of change to permafrost (ground fires and microbial action)
stored chemical release on permafrost lands (sump chemicals, other chemicals)
These might be worth expanding on in a later version of your research.
EDIT: I included some light edits to this to make my comment more clear. Also I would love to discuss more of the topics you raise in your research report, including the models suggesting different levels of contribution from carbon release from permafrost.
I noticed this quote at the end of the box highlight on “Permafrost Carbon and Feedbacks to Climate” in Chapter 5 of the IPCC Technical Report that you cite:
And this is why considering the highs and lows in a bit of depth is worth doing.
Those projections, if we accept them as accurate, do not address nonlinear release of carbon, particularly as methane. That leaves it to you to summarize the expected short-term heating from an abrupt release of carbon involving significant amounts of CH4. As I wrote, CH4 has roughly 100X the heating potential of CO2, but over the longer term of a century, that drops to about 25X.
Would abrupt release of a large amount of CH4 create a jump in average temperature of 1-2C? What would the impact of that be? How would it amplify other feedbacks, and with what consequences for humanity, given that the effect is temporary?
Your report could address those questions in more depth than it does.
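A rough way to see the scale at stake in these questions is to convert a hypothetical methane pulse into CO2-equivalents at the two time horizons mentioned above. The pulse size and GWP figures here are illustrative assumptions for this sketch, not values taken from the report:

```python
# Back-of-envelope CO2-equivalent of an abrupt methane pulse.
# GWP figures follow the rough values in the comment above
# (~100x short-term, ~25x over a century); the 50 Gt pulse size
# is a purely hypothetical illustration, not a claim from the report.

def ch4_to_co2e(ch4_gt: float, gwp: float) -> float:
    """Convert a methane pulse (Gt CH4) into Gt CO2-equivalent at a given GWP."""
    return ch4_gt * gwp

pulse_gt = 50.0  # hypothetical abrupt release, in Gt of CH4

short_term = ch4_to_co2e(pulse_gt, gwp=100)  # decadal horizon
century = ch4_to_co2e(pulse_gt, gwp=25)      # ~100-year horizon

print(f"~{short_term:.0f} Gt CO2e short-term, ~{century:.0f} Gt CO2e over a century")
```

The same pulse looks four times larger on a short horizon than on a century one, which is why the rate and timing of release matter so much for the temperature questions above.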
Right, I wasn’t looking for a response about methane, more just excitedly listing, I guess. My motivated thinking, going in, is that there’s plenty of exposed methane hydrates and free methane on shallow parts of the continental shelves exposed to much warmer waters in the Arctic and Siberia. A Nature paper from Ruppel is a bit old, and includes discussion of deeper deposits in warmer waters much further south. The paper does make exceptions for shallower deposits, as in the Arctic sea. She notes technical difficulties in resolving the origin of the methane even in those cases, but there have been efforts to resolve the questions since then. A later Reviews of Geophysics paper confronts predictions about sources and distributions. I have to dig into that.
Carolyn Ruppel is also a proponent of drilling undersea methane for fuel, and has been for the last decade. Treatment of the melting Arctic as a tipping point seems politically unpopular, now that various projected benefits of its melt have been identified. We can drill for natural gas or oil, fish, establish shipping lanes, or fight over sovereignty up there, but I’m not seeing much government attention on the ice-free Arctic as an actual climate problem.
Still, Ruppel holds an important position, and I will give her research more attention now. Thank you.
Yes, as far as sea level rise, I read the sections you mentioned, thank you. The West Antarctic is less of an immediate concern than Greenland, so I am puzzled why you haven’t mentioned Greenland explicitly. Your discussion of sea level rise doesn’t include Greenland’s contribution, but Greenland will melt before the West Antarctic, and it holds several meters of sea level rise in its ice. I believe that Greenland’s melt could shut down the AMOC as well.
I think processes like fires on permafrost land go ignored in models of permafrost thaw, just like lubrication of the bottom of Greenland Ice goes ignored. Some discussions about climate change suggest that people move north, but north into areas of melting permafrost? That seems dubious.
Anyway, thanks again, I’ll come back to you with whatever I actually conclude once I compare the two points of view that I have on arctic methane:
dangerous tipping point
harmless, possibly irrelevant, source of natural gas
Usually I would’ve given a regular upvote, but I think this should be highlighted above the meta comments and flame wars.
Thanks, I’m actually surprised that members of the community have such energy around concerns about the quality of climate change scholarship. I didn’t expect that the OP would generate these concerns.
I posted a radical opinion about climate change here some time back that got a few downvotes and almost no readers. Basically, I think global warming is now self-amplifying. Anyway, I don’t mind the lack of interest, it wasn’t scholarly work.
The meta comments are about research process, how best to represent differing viewpoints, and whether John gave fair weight to considerations outside the point of view that John holds. I don’t have a comment here, I think I’ll take what was given as where to start my own learning efforts.
What I would like from others who post here is more engagement around specific scenarios of risk. From my review of comments made in discussions of climate change, the obstacle seems to be lack of commitment to the plausibility of specific scenarios.
So for example, a discussion of a multi-breadbasket failure would include a few sentences about how our civilization would respond by choosing to grow its own food, e.g., in cities. I would like to see someone work that through. We’re talking about locally producing calorie-dense sources of carbohydrates and proteins in a situation in which grain stocks become limited worldwide. Veggies on your windowsill won’t do the job. More generally, there’s a question about stocks vs flows: we have some grain reserves, but how much, and how should they be managed in case of what percentage of global crop failures? George Monbiot has some conclusions (hint: he wants to use some old NASA tech); I’m looking forward to reading his work. Then there’s the reason for failure. Hurricanes inundating low-lying farm areas (like Vietnam) would have a longer impact on soil productivity than would a 6-week heat wave, or would it?
Another example would be how we handle internal and global migration, given some specific scenario. For example, a famine and water shortage in Bangladesh during a heat wave inducing power outages and heat stress enough to kill people. What does an altruistic response to that situation look like?
In any prediction where the claim goes, “It would be very bad if...”, there’s usually a discussion of hundreds of millions of deaths. What does that mean? Do they die in place, suddenly, or was there a predictable build-up but no response for a long time? I see this happening with a multi-breadbasket failure: there’s hardly anyone working to prevent this scenario in real time. There’s one UN organization, a small one, and then there’s ALLFED, who seem to be focused on nuclear winter. And how long is real time? There’s supposed to be a network coordinated from the UN that tracks when a global famine is looming and arranges stocks and flows to prevent the worst of it. Are they funded and effective?
I have also noticed a lack of interest in the States about the impacts of small heat waves elsewhere, for example, China’s recent heat wave. But these tiny examples are a good start for creating predictions. People fried eggs on city stones for fun over there (yes, the heat island effect). Bridges buckled from heat, and their hydroelectric dams are dry. The immediate predictions, sadly, are just focused on their GDP. There’s not much good prediction work to explain what happens if their water and hydropower shortages continue. I know only a little: different geographic regions have different levels of dependency on hydropower. But the industries affected are critical to some sections of the global economy, and their long-term shutdown should be a concern for the economics-minded. Will they turn to coal to make up the difference, and does it matter (coal, btw, has an interesting silver lining, the aerosol effect)?
Then there’s Britain’s recent heat wave, and the heat wave in the Northwest of the US a few years ago. All quite odd, linked to the meandering jet stream. The rain on Greenland’s summit, another anomaly, which sets a new precedent for Greenland melt. We don’t want an atmospheric river dumping on Greenland for days on end.
Then there’s the recent prediction of a historical flood of California, a recurring event, but worsened by climate change.
Yes, so there’s plenty to start from, and when building a scenario, you can take what’s happened, make it worse, and make it last longer or occur repeatedly. Use that to explain why climate change is bad. Conversely, to reject climate change as a catastrophic risk, explain how we go about handling these difficult situations successfully, in time to prevent risks like the deaths of hundreds of millions of people. I would like to read some of those rejections.
Thanks for mentioning ALLFED. We do tend to focus on nuclear winter, including with NASA tech like hydrogen single cell protein. However, a lot of the foods we research are relevant to climate catastrophes such as multiple breadbasket failure, including seaweed.
Yes, the protein production technology is certainly relevant. I don’t think the seaweed is unless you are confident that it would survive various changes in ocean temperature, acidity, pollutant levels, flora, and fauna that progress with climate change. What do your models say?
We have not modeled seaweed growth in a warming world, but I believe others have. I expect that species would need to move to higher latitudes, as they would need to move to lower latitudes in the case of nuclear winter.
What geographic range would growth of the seaweed serve depending on what forms of food transport? Is the use of seaweed as a food source likely restricted to the coasts and coastal populations?
Seaweed can be dried and transported long distances. It can also be used for reducing climate change, including sequestering CO2 and reducing methane emissions of cattle.
Can it be grown in tanks? I think fallout from a nuclear war would contaminate all open areas used for agriculture, including the oceans, for example, from fallout on winds, dust, rain (if there is any?), or water contamination carried on ocean currents. Do your models suggest that agriculture and aquaculture and use of open areas is a strong contamination risk or no?
In the case of climate change, the major shifts are in pH, local heat, currents, and ecology. I suspect strong climate change will require tank growth of seaweed, if any. There are global models of ocean pH change. I think pH is lower at the poles while absolute water temps near the coasts will be higher at the equator.
There was an algae-based oil called Thrive, totally monounsaturated, if I remember right, that until recently was commercially available. I used it several times and liked it as a salad oil.
Seaweed can be grown in tanks, and so can microalgae. But from what I’ve seen, the cost is significantly higher in tanks. Radioactive contamination is a concern, especially in target countries. But it is likely not the most important concern, as Hiroshima was continuously inhabited. Radioactive contamination would be diluted in the oceans, so I think seaweed would be better than land crops in this regard.
Hi, Dr. Denkenberger
Thanks! I appreciate your thoughts. I have a few more questions:
1. If you can find the research about seaweed growth in lower-pH conditions with heat waves in nearshore waters, and changes in nutrient availability (probably declines), I want to know more. I think seaweed might be a good near-term choice of replacement agriculture in the next 10-20 years, but during that time, it makes sense that the world scale up the kinds of food sources that you and ALLFED explore.
2. I like dextrose monohydrate, as a food product, it’s widely available and dissolves clean in water. With flavoring and in combination with whey (and of course casein, but I really favor whey), it makes a replacement milk. I understand that anhydrous dextrose has different properties in foods. What form of dextrose would paper mills produce? Are you more thinking something with less sweetness, like maltodextrin (also a possibility in a milk substitute)? Could the mills produce different types of carbs?
3. Assuming a 2400 kcal diet, what are your targets for macronutrients? Given a source of concentrated carbohydrates, people need a protein source, a fat source, and additional sources of minerals, vitamins, and other compounds. I like carbs (510g-450g), proteins (40g-100g), and an EFA source (1g-10g), but that’s just me. Adding in fats, you need to choose a carb minimum, as I think the trade-off would be carbs for fats, not proteins for fats.
There’s a variety of reasons to choose different kilocalorie totals and macronutrient balances; do you have a list of your criteria and final decisions, or have you looked into that in detail?
4. Have you looked into the manufacture of:
* individual essential amino acids?
* essential fatty acids?
* vitamin and mineral supplements?
5. Based on UN studies, there’s a lower limit on protein consumption that maintains protein balance in a person. Has ALLFED chosen a minimum daily human EAA requirement, per kg of bodyweight, and something similar for children?
6. With the dried seaweed you mentioned, how do you prepare it, or what sort of food products can you prepare with it? With dextrose, the easiest choices are sweet treats. What do you do with the seaweed?
7. I suspect that in a time of crisis like a multi-breadbasket failure, both refrigeration and heating (cooking) will be scarce resources for transport and storage. Therefore, the ability to store food for long periods without spoiling is important. Dried foods or powders work best there. If it were me, I’d choose carb + protein powders and vacuum-sealed EFA plus vitamin/mineral supplementation powder. How does your modeling and knowledge differ from my conclusions?
Given different food sources of proteins, and differences in absorption from those sources, as well as the balance of aminos present in those foods, people require more or less food to meet their EAA requirements. This is actually an important argument against the use of natural vegan protein sources available globally, because although total protein requirements are easily met by local food sources, EAA requirements are much harder to meet without the addition of milk or meat, unless you rely on soy. I don’t object to soy in the diet, but in terms of the environmental footprint required to meet human EAA requirements, vegan diets might be a concern if they don’t include soy. Of course you know that individual EAAs cannot be substituted for each other.
I think supplementation of manufactured foods with aminos would serve for countries with less access to milk or meat. So EAA’s to bring foods into balance with ideal EAA profiles, and individual amino acids like glutamine that have higher metabolic demand. Ajinomoto corporation does use aminos as a food additive and animal feed suppliers do this with animal feed but most amino acids taste terrible, except for lysine, glutamine, and maybe a few others. Some people like glycine but I do not like the taste.
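The macronutrient ranges in question 3 can be sanity-checked with standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). The gram figures below are the illustrative ranges from that question, not ALLFED targets:

```python
# Sanity check of the macronutrient ranges from question 3 against a
# 2400 kcal target, using standard Atwater factors. The gram figures
# are the commenter's illustrative ranges, not ALLFED recommendations.

KCAL_PER_G = {"carbs": 4, "protein": 4, "fat": 9}

def total_kcal(carbs_g: float, protein_g: float, fat_g: float) -> float:
    return (carbs_g * KCAL_PER_G["carbs"]
            + protein_g * KCAL_PER_G["protein"]
            + fat_g * KCAL_PER_G["fat"])

# High-carb end: 510 g carbs, 40 g protein, 1 g EFA
print(total_kcal(510, 40, 1))    # 2209 kcal
# Higher-protein end: 450 g carbs, 100 g protein, 10 g EFA
print(total_kcal(450, 100, 10))  # 2290 kcal
```

Both ends fall a couple hundred kcal short of 2400, which illustrates the point that the remaining gap would be closed by trading carbs for fats rather than proteins for fats.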
This seems to overstate how bad the situation is (although qualitatively it remains an absurd underinvestment, with painfully low-hanging fruit to avert pandemics and AI catastrophe at hand). Surveys of the general public and area experts do show substantial percentages in the abstract endorse nuclear (in particular), bioweapon, and AI risks as important problems (as you mention later). Governments wage wars and spend very large amounts of attention and resources on nuclear proliferation and threats. Biodefense has seen billions of dollars of spending, even if it was not well-crafted to reduce catastrophic bioweapon risk. The low budget for the BWC is in significant part a political coordination problem and not simply a $ supply issue. Annual spending from Open Philanthropy and the Future Fund on catastrophic risks, with priorities close to yours, is now in the hundreds of millions of dollars.
These are good points, I will amend
Didn’t separate karma for helpfulness and disagreement (frequently used on LessWrong) get implemented on the EA forum recently? This post feels like the ideal use case for it:
There are some controversial comments with weakly positive karma despite lots of votes, where I suspect what’s going on is some people are signalling disagreement with downvotes, and others are signalling ‘this post constitutes meaningful engagement’ with upvotes.
There are also some comments where the tone seems to me to be over the line, with varying amounts of karma (from very positive to very negative), from various people.
Were a two-karma system available, I think I would use both [strong upvote, strong disagree] and [strong downvote, strong agree] at least once each.
I notice a two-karma system has been implemented in at least one EA Forum post before, see the comments section to this “Fanatical EAs should support very weird projects” post.
the forum did offer the chance of having agree/disagree on the post, I just forgot to respond. I think it is a beta feature but happy for it to be used on this post
I think we also need renewed discussion of how the karma system contributes to groupthink and hierarchy, things that, to put it gently, EA sometimes struggles with somewhat.
As far as I can tell, the system gives far more voting power to highly-rated users, allowing a few highly active (and thus most likely highly orthodox) forum users to unilaterally boost or tank any given post.
This is especially bad when you consider that low-karma comments are hidden, allowing prominent figures (often with high karma scores) to soft-censor their own critics.
This is especially worrying given the groupthink that emerges on internet fora, where a comment having a score of −5 makes it much more likely for people to downvote it further on reflex, and vice versa.
I am not going to go into details here beyond saying that this is the plot of the MeowMeowBeenz episode of Community.
MeowMeowBeenz does not contribute to good epistemics.
I disagree that the problem here is groupthink, and I think if you look at highly rated posts, you can’t reasonably conclude that people who criticise the orthodox position will be reliably downvoted. I think the problem here is that some people vote based on tone and some on content, which means that when something is downvoted different people draw different conclusions about why.
I hope to encourage more people to instead upvote based on rigor/epistemics/quality on the margin, rather than based on tone or based on agreement (which is some of “content”) or vibe.
EDIT: I also think a surprisingly high number of people upvote low-quality criticisms that have a good tone, which makes me surprised when others assert that the movement is systematically biased against criticisms (“insufficient discernment” would be a fairer criticism, but that’s a mistake, not a bias).
Doubtful if you look at Gideon’s first comment and remember it was downvoted through the floor almost immediately.
Questioning orthodoxy is ok within some bounds (often technical/narrow disagreements), or when expressed in suitable terms, e.g.
(Significant) underconfidence, regardless of expertise and/or lack of expertise among those criticised
Unreasonable assumptions of good faith, even in the face of hostility or malpractice (double standards, perhaps a lesser form of the expectation of a ‘perfect victim’)
Extensive use of EA buzzwords
Huge amounts of extra work/detail that would not be deemed necessary for non-critical writing
Essentially making oneself as small as possible so as not to set off the Bad Tone hair-trigger
This is difficult because knowing what you are talking about, and being lazily dismissed by people you know for a fact know far less than you about a given subject matter, makes one somewhat frustrated.
As several EAs have noted, e.g. weeatquince, this is time-consuming and (emotionally) exhausting, and often results in dismissal anyway.
This is even harder to pull off when questioning sensitive issues like politics, funding ethics, foundational intellectual issues (e.g. the ways in which the TUA uses utterly unsuitable tools for its subject matter due to a lack of outside reading), competence of prominent figures, etc.
I actually think this forms a sort of positive feedback loop, where EAs become increasingly orthodox (and confident in that orthodoxy) due to perceived lack of substantive critiques, which makes making those critiques so frustrating, time-consuming, and low-impact that people just don’t bother. I’ve certainly done it.
Quantitatively, if you look at the top 10 most upvoted posts:
4 are straightforwardly criticisms: (“Free-spending EA might be bad...”, “Bad Omens”, “case against randomista development”, “Critiques of EA”)
4 are partial criticisms (“Long-Termism” vs. “Existential Risk”, “My mistakes on the path to impact”,”EA for dumb people?”, “Are you really in a race?”)
1 (the most upvoted) was a response to criticism (“EA and the current funding situation”)
1 was about the former EAForum head leaving (“Announcing my retirement”)
This is a total of 40-80%, depending on how you count.
(In the next 10 posts, I “only” see 3 posts that are criticisms, but I don’t think that 30% is particularly low either. It does get lower further down however).
I think this is a non-sequitur in response to A.C.Skraeling’s comment. They said:
A high percentage of the most upvoted posts of all time being criticism of some sort is perfectly compatible with this.
Here’s a recent case of someone questioning orthodoxy (writing a negative review of WWOTF), not bothering to express it in EA-friendly enough language, and subsequently being downvoted to a trollish level (-12) for it despite their content being much better than that: https://forum.effectivealtruism.org/posts/AyPTZLTwm5hN2Kfcb/book-review-what-we-owe-the-future-erik-hoel
I don’t find this example convincing. I just read the review and found it pretty underwhelming. Take this:
The paragraph is reacting to the following passage in WWOTF:
But MacAskill is here describing problems with rival views in population ethics, not problems with utilitarianism! Because the author (1) conflates the two, (2) mischaracterizes the Repugnant Conclusion (“worlds where all available land is turned into places worse than the worst slums of Bangladesh”), and (3) fails to distinguish the Repugnant Conclusion from standard “repugnant” implications of utilitarianism that have nothing to do with it, he ends up attributing to longtermism a number of “ridiculous” views that do not in fact follow from that position.
Separately, if criticizing WWOTF is considered to be a paradigmatic case of “heterodoxy”, it seems worth mentioning that a recent critical review by Magnus Vinding has been very favorably received (179 karma at the time of writing).
This response completely ignores the main point of my comment.
Please reread my comment because the whole point was that A.C.Skraeling said that criticism is accepted within some boundaries, or when expressed in suitable terms. You essentially just repeated Linch’s point except that my whole point was that Linch’s point is perfectly compatible with what A.C. Skraeling said.
Regarding Hoel’s review, you seem to have read my point as being that it was particularly good or convincing to EAs, which is incorrect. My point was that it was downvoted to −12, a karma score I associate with trollish posts, despite its content being much better than that, because of the combination of criticizing EA orthodoxy (longtermism, utilitarianism, population ethics etc) and not expressing it in a suitable manner. This makes it a decent example of what A.C.Skraeling said. You are free to disagree of course.
I did misread some parts of your original comment. I thought you were saying that criticizing WWOTF was itself an example of criticism that is beyond the bounds Skraeling was describing. But I now see that you were not saying this. My apologies. (I have crossed out the part of my comment that is affected by this misreading.)
That is not how I read your point. I interpreted you as saying that the quality of the book review justified higher karma than it received (which is confirmed by your reply). My comment was meant to argue against this point, by highlighting some serious blunders and sloppy reasoning by the author that probably justify the low rating. (-12 karma is appropriate for a post of very low quality, in my opinion, and not just a trollish post.)
Thanks for the retraction.
Regarding the Hoel piece, the fact that you highlighted the section you did, and the way you analyzed it, suggests to me that you didn’t understand what his position was, and didn’t try particularly hard to do so. I don’t think you can truly judge whether his content is very low quality if you don’t understand it. Personally, I think he made some interesting points that genuinely engage with some core ideas of EA, even if I disagree with much of what he said. I completely disagree that his content, separate from its language and tone towards EAs, is anywhere near very low quality, and certainly nowhere near −12. If you want to understand his views better, I found his comments replying to his piece on why he’s not an EA illuminating, such as his response to my attempted summary of his position. But we can agree to disagree.
Edit note: I significantly edited the part of this comment talking about Hoel’s piece within a few hours of posting with the aim of greater clarity.
The highly rated posts I’ve seen so far, on the topic of X risk in particular, appear to me to typically be a product of groupthink. They’re typically very articulate, very polished in form, and highly informed on details (i.e. good academic style), but they don’t escape groupthink.
As evidence, please direct us to the writers here who have been laser-focused on the critical importance of managing the pace of the knowledge explosion. Where are they? If they exist, and I sincerely hope they do (because I don’t have the authority to sell the case), I really would like to meet them.
In my opinion, groupthink is to some immeasurable degree built into the fabric of academia, because academia is a business, and one does not stay in business by alienating one’s clients. Thus, to the degree the academic depends on their salary, they are somewhat imprisoned within the limits of whoever is signing their paycheck.
Here’s an example.
I can afford to persistently sell a “world without men” idea as an ambitious solution to human violence because nobody owns me, I have nothing of value at stake. Whatever the merits of such a case might be, (very clearly debatable) academics can’t afford to make that case, because the group consensus of their community will not tolerate it. And before you protest, know that I’ve already been threatened with banning on this site just for bringing the subject up.
Academia is a business, and is thus governed by fear, and that is the source of groupthink.
If you would, please down vote this post at least 100 times, as I believe I’ve earned it. :-)
I’m looking at your profile, you have almost nothing but downvotes, but I haven’t seen you say anything dumb—just sassy. FWIW, I really like this comment.
I frequently catch myself, and I’m embarrassed to admit this, being more likely to upvote posts by users that I know. I also find myself anchoring my vote to the existing vote count: if a post has a lot of upvotes, I am less likely to downvote it. Pretty sure I’m not the only one.
Furthermore, I observe how vote count influences my reading of each post more than it should. Groupthink at its best.
I suspect that if the forum hid the vote count for a month, there would be significant changes in voting patterns. That being said, I’m not sure these changes would actually influence the vote-sorted order of the postings, but they might. I suspect it would also change the nature of certain discussions.
Admirable honesty, well done.
Why was this downvoted?
Because the voting system is in place to encourage high school students to participate in EA discussion. If you were to say something like “I still think Britney Spears is cool” then you’re gonna get down voted, so I’d try to avoid that topic if you can.
This time it’s me who downvoted. The first part (high school students) doesn’t seem close to being true, and the second (Britney Spears) is not related at all to the discussion?
Thank you for not being anonymous, and for explaining your down vote. That’s all I’ve been requesting from the beginning. I agree my colorful language was an imprecise description of the situation.
PS: Holy cow, I got −24 from just 6 votes. That’s awesome. I predict I will soon be the king of down votes! High school systems require high school participation.
Thank you so much, you’ve said what I’ve been thinking, better than I’ve been saying it.
Maybe this is helpful, not sure.
At least part of the issue may be the academic roots of EA. Academics turn intellectual inquiry into a business, which introduces some competing agendas into the process. Academics often like to pose as rebels, but I think it’s closer to the truth to say that they are somewhat imprisoned within the group consensus of academic culture. You know, if you’re trying to put your kids through college on your salary as a professor, you might have to sidestep controversial ideas that could get you in trouble with whoever is writing your paycheck.
Point being, there may be some built-in aversion to unusual ideas, which then gets fed into the reputation voting system.
To me it seems more like EA’s STEMlord-ism and roots in management consultancy, and its consequent maximiser-culture, rejection of democracy, and heavy preference for the latter aspect of the explore-exploit tradeoff.
“Number go bigger” etc. with a far lower value placed on critical reason, i.e. what the number actually is.
Orthodoxy is very efficient; you just end up pointed in the wrong direction.
I do think it’s reasonable to feel frustrated by your experience commenting on this post. I think you should have been engaged more respectfully, with more of an assumption of good faith, and that a number of your comments shouldn’t have been so heavily downvoted. I do also agree with some of the concerns you’ve raised in your comments and think it was useful for you to raise them.
At the same time, I do think this comment isn’t conducive to good conversation, and the content mostly strikes me as off-base.
The EA community doesn’t have its roots in management consultancy. Off the top of my head, I can’t think of anyone who’s sometimes considered a founding figure (e.g. Singer, Parfit, Ord, MacAskill, Yudkowsky, Karnofsky, Hassenfeld) who was a management consultant. Although the community does have some people who were or are management consultants, they don’t seem overrepresented in any interesting way.
At least on the two most obvious interpretations, I don’t think the EA community rejects democracy to any unusual degree. If you mean “people involved in EA reject democracy as a political system,” then I think I’ve literally never heard anyone express pro-autocracy views. If you mean “organizations in the EA space reject directly democratic approaches to decision-making,” then that is largely true, but I don’t think it’s in any way a distinctive feature of the community. I think that almost no philanthropic foundations, anywhere, decide where to give money using anything like a popular vote; I think the same is generally true of advocacy and analysis organizations. I’d actually guess that EA organizations are actually somewhat more democratic-leaning than comparable organizations in other communities; for example, FTX’s regranting program is both pretty unusual and arguably a bit “more democratic” than other approaches to giving away money. (If you mean something else by “rejection of democracy,” then I apologize for the incorrect interpretations!)
Lastly, I don’t think the EA community has an unusually heavy preference for the exploit end of the explore-exploit trade-off; I think the opposite is true. I can’t think of any comparable community that devotes a larger amount of energy to the question “What should we try to do?”, relative to actually trying to do things. I think this is actually something that turns off a lot of entrepreneurial and policy-minded people who enter the community, who want to try to accomplish concrete things and then get discouraged by what they perceive as a culture of constant second-guessing and bias against action.
For example, although I’m on balance in favor of the current strong upvote system, I agree it also has important downsides. And although I’m pretty bearish on the value of standard academic peer-review processes, I do think it’s really useful for especially influential reports to be published alongside public reviews from subject matter experts. For example, when it publishes long reports, OpenPhil sometimes also publishes open reviews from subject matter experts; I think it would be great to see more of that, even though it’s costly.
On the other hand, even though I don’t like the term, I do think it’s fair to say there’s an unusually large “STEMlord-ism” undercurrent to the culture. People often do have much more positive impressions of STEM disciplines (+econ and the more technical parts of analytic philosophy), relative to non-STEM disciplines. I think this attitude isn’t necessarily wrong, but I do think you’re correct to perceive that it’s there.
This is pretty far afield from what the post is about, but to me the most natural reason why someone might say EA rejects democracy are neither of the two interpretations you mentioned, but rather that EAs are technocrats suspicious of democracy, to quote Rob Reich:
I upvoted since I also thought Ben’s claims in that section were too strong.
That said, I think “suspicious of democracy” seems fairly extreme as a way to describe it. I think some EAs are healthily skeptical that democracy is the best possible governance mechanism (or more controversially, best realistically attainable governance mechanisms).
I would certainly consider myself one of them. I think we should generally have a healthy degree of skepticism towards our existing institutions, and I don’t see clear reasons why we should privilege the “democracy” hypothesis over technocracy or more futuristic setups, other than general conservatism (“Chesterton’s fence”) preferences/heuristics. In contrast, we have substantially more evidence for the benefits of democracies over monarchies or other autocratic systems.
I do think the track record of so-called elite people overestimating the efficiency gains of less free systems is suboptimal (LOL at the 1950s economists who thought that the Soviet Union would be more productive than the US). But I don’t think bias arguments should be dominant.
Every time the issue of taxes comes up, it’s a very popular opinion that people should avoid as much taxes as possible to redirect the money to what they personally deem effective. This is usually accompanied by insinuations that democratically elected governments are useless or harmful.
While it is true that aid and charity in general tend to be far from democratic, it is also widely accepted that they often cause harm or just fail to have an effect—indeed, this is the basis for our very movement. There are also many known cases where bad effects were the result of lack of participation by the recipients of aid. So it’s not enough to be “no less democratic than other charity orgs”. I believe we should strive to be much more democratic than that average—which seems to me like a minority view here.
I’m assuming you’re right about the amount of democracy in other non-profits, but the situation in my country is actually different. All non-profits have members who can call an assembly and have final say on any decision or policy of the non-profit.
Thanks for the thoughtful comment!
I do think that this position—“EA foundations aren’t unusually undemocratic, but they should still be a lot more democratic than they are”—is totally worthy of discussion. I think you’re also right to note that other people in the community tend to be skeptical of this position; I’m actually skeptical of it, myself, but I would be interested in reading more arguments in favor of it.
(My comment was mostly pushing back against the suggestion that the EA community is distinctly non-democratic.)
I’ve never heard of this—that sounds like a really interesting institutional structure! Can I ask what country you’re in, or if there’s anything to read on how this works in practice?
The first part of this does seem like a pretty common opinion to me—fair to point that out!
On the second: I don’t think “democratic governments are useless or harmful” is a popular opinion, if the comparison point is either to non-democratic governments or no government. I do think “government programs are often really inefficient or poorly targeted” and “governments often fail to address really important issues” are both common opinions, on the other hand, but I don’t really interpret these as being about democracy per se.
One thing that’s also complicated, here, is that the intended beneficiaries of EA foundations’ giving tend to lack voting power in the foundations’ host countries: animals, the poor in other countries, and future generations. So trying to redirect resources to these groups, rather than the beneficiaries preferred by one’s national government, can also be framed as a response to the fact that (e.g.) the US government is insufficiently democratic: the US government doesn’t have any formal mechanisms for representing the interests of most of the groups that have a stake in its decisions. Even given this justification, I think it probably would still be a stretch to describe the community tendency here as overall “democratic” in nature. Nonetheless, I think it does at least make the situation a little harder to characterize.
At least speaking parochially, I also think of these as relatively mainstream opinions in the US rather than opinions that feel distinctly EA. Something I wonder about, sometimes, is whether cross-country differences are underrated as a source of disagreement within and about the EA community. Your comment about how non-profits work in your country was also thought-provoking in this regard!
I don’t disagree, but I think the discussion is not as simple. When it comes to “legitimate” EA money, I think it would be much better to have some mechanism that includes as many of the potential beneficiaries as possible, rather than one national government. I just view tax money as “not legitimate EA money” (Edit: and I see people who do want to avoid taxes, as wanting to subvert the democratic system they’re in in favor of their own decisionmaking).
I live in Israel. A short Google search didn’t turn up much in terms of English language information about this, other than this government document outlining the relevant laws and rules. The relevant part of it is the chapter about the institutions of an Amuta(=Israeli non-profit), starting page 9.
In practice, since members have to be admitted by already existing bodies of the non-profit, the general assembly can be just the executive board and the auditor(s), and thus be meaningless. I’m sure this happens often (maybe most of the time). In particular, EA Israel (the org) has very few members. But I’ve been a member of a non-profit with a much larger (~100 people) general assembly in the past.
You can draw some parallels between the general assembly and a board of directors (Edit: trustees? I don’t know what the right word is). On the other hand, you can also draw parallels between the executive board and a board of directors—since in many (most?) cases, including EA Israel, the actual day-to-day management of the non-profit is done by a paid CEO and other employees. So the executive board makes strategy decisions and oversees the activity, and doesn’t implement it itself. Meaning it’s kind of a board of directors, which still answers to a possibly much larger general assembly.
Thank you for providing an excellent example of how one should down vote, if that is what you’re doing. Not meaning to put words in your mouth, just applauding a reasoned challenge.
To be clear, though, I also don’t think people should feel like they need to write out comments explaining their strong downvotes. I think the time cost is too high for it to be a default expectation, particularly since it can lead to getting involved in a fraught back-and-forth and take additional time and energy that way. I don’t use strong downvotes all that often, but, when I do use them, it’s rare that I’ll also write up an explanatory comment.
(Insofar as I disagree with forum voting norms, my main disagreement is that I’d like to see people have somewhat higher bars for strong downvoting comments that aren’t obviously substanceless or norm-violating. I think there’s an asymmetry between upvotes and downvotes, since downvotes often feel aggressive or censorious to the downvoted person and the people who agree with them. For that reason, I think that having a higher bar for downvotes than for upvotes helps to keep discussions from turning sour and helps avoid alienating people more than necessary.)
Ok, no problem, thanks for sharing that. For me, without explanations the entire voting system up and down generates entirely worthless information. With explanations then there is an opportunity to evaluate the quality of the votes.
To be fair, I’ve been using forums regularly since they first appeared on the net, and this is probably the most intelligent forum I’ve ever discovered, which I am indeed quite grateful for. Perhaps the reason I’ve complained about the voting system is that, in my mind, it contaminates what is otherwise a pretty close to perfect site. The contrast between near perfection, and high school level popularity contest gimmickry offends my delicate aesthetic sensibility. :-)
Ha! STEMlord-ism. Good one! Though I noticed that the anonymous click happy hordes who can’t be bothered to explain their votes have already downvoted your STEMlord-ism comment, so that must mean it’s completely wrong. :-)
Well, you seem to be even more ruthless than myself on this topic, so we should get along great. That said, I have decided to stop swimming upstream and am now devoting myself to accumulating as many down votes as possible. That way, should anyone wish to find my posts, they can simply power scroll to the bottom of any listings, and there I’ll be! :-)
The dynamics in this post seem weird. John is very well-respected within EA for his work on climate change, and having this report commissioned by Will makes it even more likely to be disseminated quickly and widely throughout the community.
In my opinion that means it’s particularly essential that thoughtful critiques are brought up earlier rather than later. Of course the report has already been reviewed by a lot of people I respect, but in general I’m in favour of people asking questions and raising concerns here, even though I would expect most concerns to have already been thought about and be relatively easily addressed, or in some cases not worth addressing.
So I’d like to encourage people to post these questions, concerns and critiques, but I think the environment in these comments hasn’t always been encouraging. People have been significantly downvoted for reasons I don’t understand, and John has in one case accused someone of misrepresenting their identity, which I don’t think was helpful.
Do people agree with me that we should encourage people to post their questions and concerns here, even if you don’t agree with the specific questions? Do people agree the current environment isn’t ideal for that?
I didn’t downvote any of the criticisms but I can understand why people would downvote the following quote as it is quite close to assuming intention:
“Either you are aware that this characterisation is highly inaccurate and unfair, or you are not. If the former, I am disappointed by your (apparent) dismissiveness and willingness to mischaracterise.”
I’ve seen every question or critique be below zero at some point in the last 24 hours, not just one!
I may be missing something here, but how is ‘either you have acted in way x or way y’ “quite close” to assuming ‘x’?
The sentence was constructed to deliberately hold open both possibilities (i.e. aware or not), and you have cut off the quote before the latter of the possibilities was spelt out.
“‘Either this animal is a cat or a dog’ is quite close to assuming that the animal is a cat.”
In formal logic, a statement like “either you are aware that you are a terrible person, or you are not” exhaustively covers all possibilities. It can’t be seen as an attack, because it is literally a tautology: read purely on the formal level, a clear fraction of the probability space of “I am not aware I’m a terrible person” comes from “I am not aware I’m a terrible person, because I’m not a terrible person.” However, this is not how most people read ordinary language.
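The tautology-versus-pragmatics point above can be made precise with a small probability decomposition (a sketch in my own notation, not anything stated in the thread): let $A$ stand for “is a terrible person” and $K$ for “is aware of being one”, where awareness presupposes the fact, i.e. $K \Rightarrow A$.

```latex
% K => A, hence \neg A => \neg K, so the "not aware" branch splits as:
P(\neg K) = P(\neg K \wedge A) + P(\neg A)
```

So although $K \vee \neg K$ exhaustively covers the possibilities, a share of the probability mass of “not aware” comes precisely from not being a terrible person at all, which is why the formal reading is innocuous even when the pragmatic reading is not.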
However, given the way that most humans reason, most people will in fact not interpret “either you are aware that you are a terrible person, or you are not” neutrally.
What you say is true, but is not a response to what I said.
I didn’t say Halstead was a terrible person: there is a difference between disapproving of actions and damning persons. In any case, leaving open a significant possibility space for poor intent is not in any way close to ‘assuming intent’ and if someone reads it as such, they are wrong.
The comment was not meant to be neutral, but again, disapproving of an action is not the same as assuming poor intent, never mind calling the actor a ‘terrible person’.
I’m starting to see the ways in which tone-policing is selectively employed in this community (well, Forum, at least) to shut down criticism.
I don’t think many of the people who do it are conscious of what they’re doing, but there does seem to be an assumption that strong criticism (i.e. what is necessary if something is very wrong or someone has acted badly) is by default aggressive and thus in violation of group norms.
Thus, all criticism must be stated in the most faux-friendly, milquetoast way possible, imposing significant effort demands on the critic and allowing them to be disregarded if they ever slip up and actually straightforwardly say that a bad thing is bad.
Naturally this is far more likely to be applied when the criticism is directed at big or semi-big figures in the community or orthodox viewpoints.
And we wonder why EA is so terminally upper-middle class...
I’m speaking for the moderation team right now. We enforce civility on the Forum and don’t view this property as opposed to criticism or disagreement.
That’s good to know, but I wonder how much change you personally can make. It’ll be significant, for sure, but I think a lot of this is cultural: a sort of EA-accelerated chunk of the class-coded aspects of the Hidden Curriculum.
I apologize if I was too quick to misdiagnose the issue. FWIW, I think I’d have trouble responding dispassionately to “Either you are aware that this characterisation is highly inaccurate and unfair, or you are not.” To be clear, I think I also would have trouble dispassionately responding to claims that I’m a sock puppet, and I do think it’s reasonable for you to be quite upset about this.
I agree that if someone is genuinely a terrible person, especially if that someone is a big or semi-big figure in the community, our politeness norms may make it more annoying to criticize them harshly than if we had more combative norms. I agree that this is pretty bad inasmuch as it makes it harder to unearth real problems, and this is a plausible hypothesis.
I think I still want to defend some fraction of such norms, however, because our online norms are still fairly aggressive compared to what most people are used to offline, and I suspect that if our online culture were substantially more aggressive (especially in unkind ways) than it currently is, this would make it harder for people to engage and address real problems, and easier to just disengage.
Strong upvote, I thought I was going crazy. Thank you!
Thank you for doing this and congratulations!
I haven’t managed to read the full report yet, unfortunately, but I have a few questions/criticisms already. Sorry to move onto these so quickly, but I do think it’s important. (I tried to write these in a more friendly way, but I keep failing to do so, so please don’t take the tone as too aggressive; I really don’t intend it to be, it just keeps coming across that way! Sorry (: ):
There are no mentions of systemic or cascading risks in the report. Why is this?
You don’t seem to engage with much of the peer-reviewed literature already written on climate change and GCRs, for example: Beard et al 2021, Kemp et al 2022, Richards et al 2021. Don’t get me wrong, you might disagree or have strong arguments against these papers, but it seems to some degree like you have failed to engage with them.
You don’t seem to engage with much of the more complex systems aspects of civilisation collapse/ existential risk theory. Why is this?
There are no mentions of existential vulnerabilities and exposures, and you seem to essentially buy into a broadly hazard-based account. The subdivision into direct and indirect effects further seems to support this idea. In this way you seem to ignore complex risk analysis. Why is this?
You seem to broadly ignore the work that went on around “sexy vs unsexy risks” and “boring apocalypses” and the more expansive work done to diversify views of how X-Risks may come about. Why is this?
Thanks for the report, and I am sure I will have more questions the more I go through it. I guess my major concern with this sort of stuff is that this work is likely to go down (regardless of its quality, and I am not saying it’s bad) as a “canonical” work in EA. So I think you perhaps have a responsibility, even if you in the end reject some of this scholarship, to engage with a lot of the (peer-reviewed) scholarship on GCRs and X-Risks that has occurred in the “third wave” research paradigm of Existential Risk Studies, and I am slightly concerned that you appear not to have engaged with this literature!
I don’t think explicit discussion of cascading risks would change the fundamental conclusions, and cascading risks are implicitly discussed at several points in the piece.
I have read the papers you mention. You will find (attempted) refutations of many of the points in those articles scattered across the report. In earlier drafts, I did have a direct response to those papers, but it is now all dealt with in different sections of the main report.
I don’t agree with the ‘everything is connected’ idea of society, on which society is incredibly sensitive to mild climatic changes, if that is what you mean by complex systems theory. And I defend that view at length in the report.
There are many, many different ways of conceptually dividing up an analysis of climate risk. The direct/indirect way is conceptually exhaustive, and so insofar as I have accurately covered the direct and indirect risks, I have accurately covered overall climate risk.
True that I did ignore this, explicitly at least. I do not see how it would affect my conclusions. There is no indication from the climate literature that climate change would cause anything close to a boring apocalypse. Also, I think it is very obvious from the study of sexy and unsexy risks that the sexy risks (bio, AI) are far, far bigger than the unsexy risks.
I did engage a lot with that literature, I just don’t talk about it directly. One could also say that Beard et al and Kemp et al don’t engage with a lot of relevant literature, which I do discuss in my piece. E.g. Beard et al doesn’t engage with the literature suggesting that we are not going to run out of phosphorus and soil; Kemp et al doesn’t engage with the literature on assumptions about coal use in integrated assessment models.
(I have a few thoughts on this but it’s being marked as spam for some reason, possibly length. I’m going to post this as a short response and then edit in the content. Please let me know if you can see it.)
Hi John, thanks for the post!
I’ll leave an in-depth response to Gideon, but I have a few points that I think would be helpful to share. In short, your response worries me. I have tried to keep the prose below inoffensive in tone, but there is a trade-off between offensive directness and condescending obfuscation. I hope I have traced the line accurately.
You may not think significant discussion of cascading risks would change the fundamental conclusions of your report, but many researchers, often those with considerably more experience and expertise in climate risk (e.g. the IPCC), do: strongly so. Surely in a book-length report there is room for a few pages?
If you have refuted arguments, is it not academic best practice to cite the papers you respond to? In any case, if you know of and have read the papers, are we to understand that you believe many (if not most) peer-reviewed papers on Global Catastrophic and Existential climate risk are not worth mentioning anywhere in 437 pages of discussion?
This response causes me the most concern. That is simply not what complex systems theory is. Either you are aware that this characterisation is highly inaccurate and unfair, or you are not. If the former, I am disappointed by your (apparent) dismissiveness and willingness to mischaracterise. If the latter, I wonder how you could have done anything close to sufficient research into one of the foundational components of many studies of climate risks.
It is true that there are many conceptual frameworks for climate risk, and in a study of any topic you are generally expected to state, explain, and justify your conceptual framework. This is especially true when the framework you use (i.e. that of the Techno-Utopian Approach) has been strongly critiqued, for instance in Democratising Risk (Cremer and Kemp, 2021), another highly consequential paper you do not appear to have engaged with or cited. The dichotomy of ‘direct’ and ‘indirect’ risks may be exhaustive, but this is not the only criterion for an adequate theoretical framework. To be somewhat glib, but logically coherent: we could make the same argument for categorising phenomena according to whether their names contain an odd or even number of letters.
I also disagree with this point, especially the final sentence, but there is little to engage with: simply assertions. Let us agree to disagree.
Beard et al. and Kemp et al. are each less than 5% of the length of your piece. Of course they cover less ground. There is a difference between a 10- or 20-page paper not mentioning every single caveat in every single work they cite, and one (1) failing to substantively engage with or even cite almost all GCR-specific climate research, (2) not explicitly stating nor justifying one’s methodology in the face of strong critique, and (3) disregarding (in complex systems studies) a massive component of studies of climate risk, wider GCR (e.g. Fisher and Sandberg 2022), and the studies of Earth-system dynamics in general without explanation or justification.
Do you expect to subject this work to peer-review, and if not, why not?
The work was reviewed by experts, as I discuss in the other comment.
I do discuss tipping points at some length. I don’t see how the idea of cascading risks would change my substantive conclusions at all. If you want to argue that cascading risks would in fact affect my conclusions, I would be happy to have that debate.
In fairness to me, the Kemp et al paper was only published a couple of weeks ago, so I couldn’t include it in the report. I think much of that paper is incorrect, and the reasons for that are discussed at length in the report. The conclusions of the Beard et al and Richards et al papers are, in my view, refuted mostly in section 5 of my report. If you have a criticism of that section, which largely leans on the latest IPCC report, I would be happy to have that discussion.
I have read the Richards et al complex systems paper. It contains the following diagram purporting to show how climate change could cause civilisational collapse.
I am open to the possibility that my argument that climate change will not destroy the global food system is wrong. I am happy to discuss substantive criticisms of those arguments. I do not see one in the Richards paper, or in what you have said.
Your critique here seems to me to miss the mark, as illustrated by your own example. If I am assessing biorisk and categorise viruses according to whether they have an odd or even number of letters, then so long as I got my risk assessment right for the odd and even numbered letter viruses, I would have actually evaluated biorisk. I don’t know whether I am taking a ‘techno-utopian approach’ but I thought the Cremer and Kemp paper was not very good and I am not alone in thinking that (it’s also not peer reviewed, if that is the criterion we are using). As I have said, I seldom depart from the IPCC in the report. If you think I do, which of my arguments do you think are wrong?
It’s a bit weird to argue that a 400+ page report is radically incomplete without making any arguments and then to criticise my response as just making assertions. Which of my substantive arguments do you disagree with and why?
It is true that those papers are short but they also do not engage with literature that is inconsistent with most of their main claims. They lean heavily on the idea of planetary boundaries, which is extremely controversial and I argue against at length in the report.
Given that the review process was not like normal peer review, would it be possible to have a public copy of all the reviewers’ comments, as we get with the IPCC? This seems like it may be important for epistemic transparency.
Indeed, knowing what I know of some of the reviewers Halstead named I am very curious to see what the review process was, what their comments were, and whether they recommended publishing the report as-is.
I’ve always been quite confused about attitudes to scholarly rigour in this community: if the decisions we’re making are so important, shouldn’t we have really robust ways of making sure they’re right?
About planetary boundaries:
Leaving aside the discussions of the specific value and/or variable used to measure a specific boundary (which the authors themselves caveat may be temporary until better ones are found), isn’t most of the controversy due to critiques conflating planetary boundaries and tipping points?
META: This + additional comments below from Halstead are strongly suggestive of bad-faith engagement: lazy dismissal without substantive engagement, repeated strawman-ing, Never Play Defense-ing, and accusing his critics of secretly being sockpuppet accounts of known heretics so their views can be ignored.
On the basis of Brandolini’s Law I am going to try to keep my replies as short as I can. If they seem insubstantial, it is likely because I have already responded to the point under discussion elsewhere, or because they are responding to attempts to move the conversation away from the original points of criticism.
I have specific criticisms to make, and I would like to see them addressed rather than ignored, dismissed, or answered only on the condition that I make a whole new set of criticisms for Halstead to also not engage with.
I suggest the reader read Halstead’s response before going back to Gideon’s and my comments. It was useful for me.
Climatic tipping points, cascading risks, and systemic risk are different things and you (hopefully) know it.
If you have refuted arguments made by relevant papers, why didn’t you cite them?
I’m not sure I understand your argument here: you are not under any obligation to discuss opposing perspectives on climate risk because a different paper on climate risk did not explicitly refute an argument that you would go on to make in the future?
In any case, this is not at all what Gideon or I said. Your lazy and factually inaccurate dismissal of complex systems theory remains lazy and factually inaccurate. I am not sure where this point about Richards et al. comes into it.
You would have evaluated biorisk if the only possible use of a methodology was making sure you had a comprehensive categorisation system, which (as I hope you know) is not at all true.
I don’t really see how I have not made any arguments here. I suppose I could ask someone if it’s possible to write ‘Please cite your sources.’ in first-order logic.
I would like to hear your justification for how Beard et al, Richards et al, and Kemp et al all lean heavily on the idea of planetary boundaries, and how, if this was true, it would be relevant.
However, I doubt this would go anywhere. I suspect this is simply yet another way of ignoring people who disagree with you without thinking too hard, and relying on the combination of your name-recognition and the average EA’s ignorance of climate change to buy you the ‘Seems like he knows what he’s talking about!’-ness you want.
The moderation team feels that this is unnecessarily hostile and rude, and violates Forum norms. This is a warning; please do better in the future.
How would you prefer people to react when someone acts in bad faith?
What aspects of this comment fall outside those bounds?
Writing this as a moderator, but only expressing my own view.
Accusing someone of acting in bad faith on a public forum can be very damaging to the person, and it’s very easy to be mistaken about such a characterization. Even if the person is acting in bad faith, it might escalate things and make it hard to deal with the underlying problem well.
Instead, it would be better to go through the moderation and community health channels, which you can do by flagging comments/posts or by contacting us directly.
I respect this for being a substantive critique and have upvoted, even though it does read as pretty harsh to me.
I do think the way this comment is written might make it hard to respond to. I wonder if it would be easier to discuss if either (a) you made this comment a separate post that you linked to (it’s already long enough, I reckon) or (b) you split it into 3-4 individual comments with one important question or critique in each, so that people can discuss each separately? My preference would be for (a) personally, especially if you have the time to flesh out your concerns for a less expert audience!
I was worried about the harshness aspect but to be frank there are only so many ways to say that someone in a position of power and influence has acted with negligence.
Perhaps these could also be useful things to do (though given the afore-mentioned herd-downvoting, I doubt that (a) would receive sufficient good-faith engagement to be worth writing).
(b) could be useful for facilitating small-scale discussion, but I haven’t seen any indication that there are people who want to or are trying to do that, e.g. with a comment saying ‘On point #4...’
In any case, I have seen far longer comments than mine and comments with more questions and less elaboration than Gideon’s get dozens of upvotes before.
These criticisms (and I’m discussing both your response and Karthik’s here, as well as a more general pattern) appear to only be brought up when the EA big boys are being criticised: I doubt if Gideon had asked five complimentary questions he would have received anything close to such a negative reaction.
This does remind me of a lot of the response to Democratising Risk: Carla and Luke were told that the paper was at once too broad and too narrow, too harsh and yet not direct enough: anything to dismiss critique while being able to rationalise it as a mere technical application of discursive norms.
It seems like my concern was unwarranted anyways as John already responded directly to each of your points!
Yes and no in my opinion haha but I see your point
I think some of the criticism of your paper with Kemp was due to it being co-authored with Phil Torres, who has harassed and defamed many people (including me) because he thinks they have frustrated his career aims
Accusing anonymous or pseudonymous Forum accounts of being someone in particular (or doxing anyone) goes against Forum norms. We have reached out to John Halstead to ask that he refrain from doing so and that he refrain from commenting more on these threads.
My comment above was vague. Just a note to clarify: by “on these threads” we meant threads involving A.C.Skraeling. In our message to John Halstead, we wrote: “refrain from commenting on the existing threads with A.C.Skraeling.”
What are you even talking about?
I am not Cremer and it seems like an odd act of ego-defence to assume that there is only one person that could disagree with you.
I have no idea what you mean about Phil Torres: he clearly needs to take a chill pill but ‘harassment’ seems strong. Perhaps I’ve missed something. ‘Frustrated his career aims’?
In any case, Torres wasn’t a co-author of Democratising Risk, though I agree that he would probably agree with a lot of it.
Even if all of your implicit points were true, why on Earth would co-authorship with someone who had defamed you be grounds to offer reams of contradictory critiques of critical works, while making none of the same critiques of comparable written pieces [EA Forum comments, but whatever] that do not substantially disagree with the canon?
I just assumed you were Cremer because you kept citing all of her work when it didn’t seem very relevant.
Perhaps the authors of the paper would like to share how much Torres contributed to that paper and how that might have influenced its reception.
I generally think it’d be good to have a higher evidential bar for making these kinds of accusations on the forum. Partly, I think the downside of making an off-base sock-puppeting accusation (unfair reputation damage, distraction from object-level discussion, an additional feeling of adversarialism) just tends to be larger than the upside of making a correct one.
Fwiw, in this case, I do trust that A.C. Skraeling isn’t Zoe. One point on this: Since she has a track record of being willing to go on record with comparatively blunter criticisms, using her own name, I think it would be a confusing choice to create a new pseudonym to post that initial comment.
I think this is fair. I shouldn’t have done it and am sorry for doing so
I strongly agree—if someone has a question or concern about someone else’s identity, I think they should either handle it privately or speak to the Forum team about their concerns.
I think this level of accusation is problematic and to some degree derails an important conversation. Given the role a report like this may play in EA in the future, ad hominem and false attacks on critics seem particularly problematic.
Jumping in here briefly because someone alerted me to this post mentioning my name: I did not comment, and I was not even aware of your forum post, John (sorry, I don’t tend to read the EA Forum). I don’t tend to advertise previous works of mine in other people’s comment sections, and if I commented anywhere it would certainly be under my own name.
That is a rather odd assumption to make given that two of the issues under discussion were X-risk methodology and EA discourse norms in response to criticism.
Also I think it’s worth noting that you have once again ignored most of the criticism presented and moved to the safer rhetorical ground of vague insinuations about people you don’t like.
‘Never Play Defense’, anyone?
I’m interested to see your in depth response to me
I meant ‘I’ll leave the in-depth response to Gideon’. What you say speaks for itself: if Halstead presented this at a climate science org these would be some of the first questions asked and I’m puzzled (+ a bit weirded out, to be frank) as to why they’re getting such a hostile response.
(Case in point for my comment about downvoting, community hierarchy, and groupthink, below)
I strongly upvoted this because it was at −4 karma when I saw it and that seems way too low. That said, I understand the frustration people feel at a comment like this that would lead them to downvote. It raises far too many questions for the OP to answer all at once, and doesn’t elaborate on any of them enough for the OP to respond to the substance of any claim you make. This is the kind of comment that is very hard to answer, regardless of its merit.
Perhaps that’s fair, certainly the asking-too-many-questions part. I am less sure that it doesn’t expand enough, because I would like to give John credit and assume he knew what bits of the literature he was excluding. More generally, my concern is that a post like this may quickly establish itself as “orthodoxy”, so I wanted to raise my concerns as early as possible. Perhaps I should have waited a bit to do a more comprehensive response; I will learn for next time.
To be fair a ‘comprehensive’ response would include even more questions, so I’m not confident there’s any way to win here.
Yes I am also very worried about the orthodoxy point; EA is often a closed citation loop, where a small number of people and organisations cross-cite one another and ignore outside (‘non-value aligned’) work. Most reading lists are absolutely dominated by ~5 names, sometimes a few more.
Halstead, as a semi-big name at a prominent organisation (and, for better or worse, the movement’s de facto authority on climate change) is extremely likely to have his work accepted into the canon without significant challenge from climate experts (with training in climate science and policy, rather than philosophy...).
Thus, a fresh crop of undergraduates will be told that climate is no big deal compared to sexier and more EA-friendly stuff like AI, without ever being aware of all the climate-related GCR work Halstead doesn’t engage with (or even mention). I suspect, perhaps uncharitably, that this is because most of it disagrees with him. This in turn is partially because it has to be peer-reviewed by people selected on the basis of their expertise in climate risk, rather than EA value-alignment.
This lack of internal critique is probably because EA talks down climate so much (not least due to the influence of Halstead) that there simply aren’t very many climate-focused people around, and those that are around know the kind of response they get when they speak out of turn (see above haha).
I love so much of EA but for a community so focused on epistemics we really are bad at accepting criticism, especially when it’s directed at the big boys.
The report was reviewed by various people with expertise in various different aspects of climate change. The reviewers are pasted at the bottom of this comment.
The criticism raised by Gideon seems to be that it doesn’t cite some studies that take an extreme stance on climate risk relative to mainstream climate scientists and climate economists. I discuss many of the claims made in these papers at considerable length. If you disagree with some of my substantive claims, then I would be happy to discuss them.
I don’t think my report is outside the mainstream of IPCC science. I can’t think of any substantive claims that are inconsistent with the latest IPCC report, with the exception of my criticism of the Burke et al (2015) paper and the ecosystem collapse stuff.
The reviewers for the report are below, though they may not agree with everything I have written.
Matthew Huber, Professor, Dept. of Earth, Atmospheric and Planetary Sciences, Purdue University
Dan Lunt, Professor of Climate Science, Bristol University
Jochen Hinkel, Head of Department of Adaptation and Social Learning at the Global Climate Forum
R. Daniel Bressler, PhD Candidate in Economics at Columbia
Cullen Hendrix, Professor at the Korbel School of International Studies, University of Denver
Andrew Watson, Royal Society Research Professor at the University of Exeter
Peter Kareiva, Pritzker Distinguished Professor in Environment & Sustainability, UCLA
Christina Schädel, Assistant Research Professor Center for Ecosystem Sciences and Society, Department of Biological Sciences, Northern Arizona University
Joshua Horton, Research Director, Geoengineering, Keith Group, Harvard
Laura Jackson, UK Met Office
Keith Wiebe, Senior Research Fellow at the International Food Policy Research Institute
Matthew Burgess, Assistant Professor, Department of Environmental Studies, University of Colorado Boulder
David Denkenberger, Assistant Professor of Mechanical Engineering at University of Alaska Fairbanks
Peter Watson, Senior Research Fellow and Proleptic Senior Lecturer, School of Geographical Sciences, Cabot Institute for the Environment, University of Bristol
Goodwin Gibbins, Research Fellow, Future of Humanity Institute, University of Oxford
Linus Blomqvist, Senior Fellow at Breakthrough Institute, PhD candidate in Environmental Economics and Science at UC Santa Barbara
Luca Righetti, Research Fellow, Open Philanthropy Project
Johannes Ackva, Climate research lead, Founders Pledge
James Ozden, Extinction Rebellion
This is good, though offering comments on various sections of a google doc is of course a very different exercise to full and blind peer-review.
Did any of the reviewers notice that you had not mentioned (almost?) any climate-related GCR papers? If so, what was your response to them?
As per your comments about complex systems above, please do not dismissively mischaracterise the views of your critics. This is the kind of thing an average forum user would get hammered for, please do not try to get away with it just because you know you can.
If you discuss their arguments, why didn’t you cite them? If the X-risk climate corpus takes an ‘extreme’ stance by and large, is that not the kind of thing you would expect to see discussed in a >400 page report on climate change X-risk?
Even to the extent that this report is within the IPCC mainstream, notwithstanding, for instance:
The complete absence of systems perspectives (even just to justify your rejection, something I, to be frank, would expect in an undergraduate dissertation)
Lack of consideration of vulnerability, exposure, or cascading disasters
Silent disregard for Reisinger et al.’s discussion of the concept of risk
...it is well-known that the IPCC must moderate its conclusions and focus on better-case scenarios for political reasons, i.e. so as to not be written off as alarmist. You know this, because it is mentioned in Climate Endgame and discussed at length by Jehn et al.
This is another rather important issue in climate risk scholarship you would expect to see mentioned in a work this long.
“it is well-known that the IPCC must moderate its conclusions and focus on better-case scenarios for political reasons, i.e. so as to not be written off as alarmist”
As a climate scientist reading this, I just thought I’d pick up on that and say I have not got that impression from reading the reports or conversations with my colleagues who are IPCC authors. I’ve not seen any strong evidence presented that the IPCC systematically understates risks—there are a couple of examples where risks were perhaps not discussed (not clearly underestimated as far as I’ve seen), but I can also think of at least one example where it looked to me like IPCC authors put too much weight on predictions of large changes (sea ice in AR5). (This is distinct from the thought that the IPCC doesn’t do enough to discuss low-likelihood, high-impact possibilities, which I agree with.)
It might be good to zoom out here and get a sense of what the criticism is here. I am being criticised for not citing four papers. One of them is by you and Kemp, is not peer-reviewed and is not primarily about climate change. The other one is Kemp et al 2022 which was published two weeks before I published my report so I didn’t have time to include discussion of it. The other papers I am being criticised for not mentioning are Beard et al and Richards et al. If you want to explain to me why the points they raise are not addressed in my report, I would be happy to have that discussion.
The Jehn et al papers make claims which are wrong. It is blatantly not true to anyone who knows anything about climate change that the climate science literature ignores warming of more than 3ºC.
For those who haven’t read the full comments section, Halstead has decided that I am Carla Zoe Cremer.
Democratising Risk is a preprint, no?
Democratising Risk is not primarily about climate change, but it is about X-risk methodology. You have written a piece about X-risk. Scholarly works generally require a methodology section, and scholars are expected to justify their methodology, especially when it is a controversial one. This is advice I would give to any undergraduate I supervised.
It is true that Kemp et al. 2022 has not been published for long, so you can be excused for not discussing it at length. It seems odd not to have mentioned it at all, though: two weeks is not a huge amount of time, but it is enough to at least mention by far the most prominent piece of climate GCR work to date.
If you discuss Beard’s and Richards’ points, why don’t you cite them? In any case, justification for the lack of substantive engagement seems like something you need to offer, rather than me.
In any case, the lack of mention of most climate-specific GCR work is not the only thing you have been criticised for: please scroll up to see Gideon’s original comment if you like.
Jehn et al. do not say that the climate science literature ignores warming of more than 3ºC; they say that it is heavily under-represented.
Again, please stop lazily mischaracterising the views of your critics.
I don’t know if this repeated strawman-ing is accidental or not: if accidental, please improve your epistemics, if not, please try to engage in good faith.
This is an interesting point of view that you should have mentioned and justified, as any student would be expected to in an essay, rather than simply pretending that criticisms do not exist.
I can see where you’re coming from here but I don’t think the specifics really apply in this case.
There are many questions to raise about this google doc, and it seems fair to the reader to ask them all in one place rather than drip-feeding throughout a tree of replies and reply-replies. If responding to them all would take up too much of Halstead’s time, he can say so, no?
There’s not usually very much to elaborate when it comes to questions of omission: x is an important aspect of climate risk, Halstead has not mentioned x.
I suppose you could add the implicit points (studies of topics should include or at least mention the important aspects of those topics, space wasn’t a constraint, Halstead knows what the terms mean, etc.) but that’s unnecessary in 99% of conversations and not a standard we expect anywhere else.
(Edit: it seems my fears were right, lol)
Thanks for posting this, Gideon. I shared similar issues to you but didn’t reply because I feared it would be dismissed or ignored. It is gratifying to see that John has replied, but epistemically concerning that your entirely reasonable criticisms are being so heavily downvoted: at present you average 1 point from 13 votes.
These are critiques you would expect anyone with a background in climate risk to make and I don’t see any good reason for them to have been dismissed by so many fellow EAs. Could any of the downvoters explain their decision?
Yes, whatever the subject, whatever the thread, would downvoters please explain their vote? How are authors supposed to respond to and maybe accommodate downvoters’ concerns if downvoting remains a secret, anonymous procedure containing no useful information beyond “don’t like it”? If clicking on things and running is what works for someone, consider Facebook. Thanks.
I disagree with that. Downvotes are often valuable information, and requiring people to explain all downvotes would introduce too high a bar for downvoting.
In all cases perhaps, but it is strange to see objections that would be super obvious top-of-the-head stuff in climate circles dismissed out of hand here.
(Also can someone who knows more about the Forum than me explain how this reply has 51 points from 13 votes? Even if strong-upvotes count as double this is extremely inflated. Are the totals extremified or something? Is it multiplicative?)
I wouldn’t characterise it as dismissing out of hand.
What would you call it?
I suppose all I have to say is that I often see very reasonable critiques downvoted through the floor without explanation worryingly often.
I haven’t theorised very much about the cause, but the phenomenon correlates suspiciously well with substantive or strong criticism of prominent figures within EA.
If this perception is accurate, it does not seem like good epistemic practice.
(This one has 14 points from 3 votes? Do three strong-upvotes produce 14 overall karma? Why?)
I’m flattered to be called a prominent figure in EA, but I think that is not really true. If people want to criticise the substantive claims in the report, I am happy to have that discussion and I think people on the Forum would appreciate it
You may think this, but (some) people on the Forum clearly do not.
I think this strongly contributes to groupthink.
People will subconsciously adapt their views to match the majority to some extent, and assume that a post or comment has the rating it does for a reason. This is exacerbated by the [issues around hierarchy and hero-worship EA sometimes has.](https://forum.effectivealtruism.org/posts/DxfpGi9hwvwLCf5iQ/objections-to-value-alignment-between-effective-altruists)
Hi Zoe, what is your proposed alternative to a karma system?
I presume that you are assuming I am Zoe Cremer here. I am not Zoe (Carla? Which is her actual first name?) and I have never met her, but feel free to assume only one person has issues with EA norms if you want. That post has 200 upvotes: some people must have agreed with her, even if you didn’t.
Based on Cremer’s recent statements in and around the MacAskill profile in the New Yorker she seems to be completely worn out by EA and has largely lost interest: presumably not someone who would dedicate very much time to getting into EA Forum comment wars?
This isn’t just an issue with the karma system (though artificially magnifying the ratings of somewhat popular comments so that 7 votes can produce a rating of over 25 is definitely an odd choice); it’s a cultural issue. Why did you ignore these aspects and focus on the most technical issue?
Great to see such a detailed, focused, and well-researched analysis of this topic, thank you. I haven’t yet read beyond the executive summary, other than a skim of the longer report, but I’m looking forward to doing so.
Can you make your model of indirect risks accessible to the public? It’s asking for access. Thanks a lot.
Also, why do you assume that “most of the risk of existential catastrophe stems from AI, biorisk and currently unforeseen technological risks”? My impression from earlier in the chapter is that you are essentially drawing the idea that you can ignore other potential causes from The Precipice. Is this correct?
Moreover, this assumption only seems true if you assume an X-Risk will come as a single hazard. If it is, say, a cascading risk, cascading to civilisational collapse and then extinction, then the idea that these are the biggest risks should be questioned. Similarly, if you view it as a multi-pulsed event (say, civilisational collapse from one hazard, a series of hazards, or a cascade, followed by whatever may slowly make us extinct), then once civilisation has collapsed it is easier for smaller hazards to kill us all, and once again the primacy of these hazards is reduced. Only if you take a reductive view that sees extinction as primarily due to direct, single or near-single hazards that kill everyone or basically everyone can this model be valid.
Of course, you do talk a little about multi-pulsed, sub-extinction risks followed by recovery being harder, but not in much detail. In particular, you claim that extreme climate change may make civilisational recovery from collapse much harder, but then don’t seem to deal with this question in detail, which may be considered highly important, particularly if we think civilisational collapse is considerably more likely than extinction. Moreover, you suggest that “there is some chance of civilisational collapse due to nuclear war or engineered pandemics”, essentially suggesting that other causes of civilisational collapse that are less direct, and therefore could be made more likely by climate change, are negligible. This assumption should be stated and evidenced, and yet you seem to include no sources on it.
Moreover, you state (uncited) that “the main indirect effect is Great power Conflict.” What’s your source for this claim, and why are you so certain of it that you are confident you can discount other indirect effects? It feels like these assumptions once again should be supported.
If this is the case, then I might say that relying on the (in my opinion) rather reductive, hazard-centric, simple risk assessment model of Ord etc. is the crux of our disagreement. From my (still moderately limited, unfortunately) reading of the report, it appears that most of your facts are in order; however, much of what I think it is very bad that you fail to mention (systemic risk, cascading risk, vulnerabilities, exposures, complex risk assessments, etc.) originates from this hazard-centric approach. (I don’t say this to suggest my way is inherently intellectually superior to yours; indeed, it is a plausible position that X-Risks may emerge out of epistemically simpler, more direct, more “simple” risks.) I won’t overthrow a paradigm in a single comment, and I won’t even try, but please do tell me if you agree that this is the crux of our disagreement. Moreover, whilst in a previous comment you said you have argued for this methodology of viewing X-Risks at length in the piece, I am yet to find such an argument. If you could point me to where you think you make this argument, I will reread that section (apologies if I have missed it). If not, it feels this approach needs considerably greater justification.
I have more comments/criticisms which I will post in other comments, but certainly on this indirect risk things, these are my questions.
The model should be shared now.
Yes that is correct re my assessment of the other existential risks. I’m taking a view similar to Toby Ord and I suppose the rest of the EA community about where the main risks are. Of course, my main goal in the report is not to make this substantive case; I largely take it as given.
I don’t really see how viewing climate change as a cascading risk would change the overall risk assessment. If you argue that climate change is a large cascading risk then you would have to think that climate would play an important role in starting the cascade from collapse to extinction. I don’t see how it could do that and explain why at length in the report. Can you lay out a concrete scenario that sketches this cascading risk worry that isn’t already discussed in the report?
The report does suggest that climate change would make civilisational recovery harder, but for plausible levels of warming it would not be a large barrier to recovery, and this should be clear from the substantive discussion in the report.
The whole report is about whether climate change could lead to civilisational collapse or something close to it. What other mechanisms do you have in mind that are not already discussed in the report?
The influence of climate change on great power war or war more generally seems like the most obvious indirect risk of climate change that could make a substantial difference to the scale of climate change. It is often argued that climate change is a threat multiplier for conflict risk. I discuss the literature on this at length. What other indirect risks do you think might be comparably important?
It does seem that you think that viewing climate change as a cascading risk would make a large difference to my conclusion. I don’t understand what you think this cascading risk actually is that is not already discussed in the report.
I’m not sure which comment you are referring to? I argued that the direct/indirect approach is conceptually exhaustive, which is trivially true.
“I don’t really see how viewing climate change as a cascading risk would change the overall risk assessment. If you argue that climate change is a large cascading risk then you would have to think that climate would play an important role in starting the cascade from collapse to extinction. I don’t see how it could do that and explain why at length in the report. Can you lay out a concrete scenario that sketches this cascading risk worry that isn’t already discussed in the report?”
Having read the report, I am still unclear where in the report you lay out this substantive case. Could you please point this out to me, and I will be happy to reread it as I must have missed it. Also note I don’t just refer to cascading risks, but to systemic risks, existential vulnerabilities and exposures etc. Please show me where in your report you make a substantive case against these ideas as well. Thanks!
Moreover, cascading risks may only lead to civilisational collapse, and may not even get you to extinction. If, as you suggest, climate change makes recovery harder, this may be a major problem from an X-Risk perspective. I agree, it’s unlikely a cascade would directly lead to extinction, but if it leads to major societal collapse (which your piece also doesn’t seem to define), and recovery is harder, this may be enough to pose an X-Risk.
“The report does suggest that climate change would make civilisational recovery harder but for plausible levels of warming, it would not be a large barrier to recovery and this should be clear from the substantive discussion in the report”
The report suggests this but doesn’t, as far as I can tell from having read the report, make this case particularly substantively. Also, in the section where this seems to be discussed most at length, “subsequent collapse”, there don’t seem to be any citations. If you could point me to what sources you have used to show that it shouldn’t pose a large barrier to recovery, this would be nice. You suggest it should be clear from the report, so if you can point me to where in the report this should be clear, that would be great. Apologies if I am being stupid and have just missed something obvious in the report.
“The whole report is about whether climate change could lead to civilisational collapse or something close to it. What other mechanisms do you have in mind that are not already discussed in the report?”
How about a scenario where a multitude of factors eg climate-related damages, civil conflict, interstate conflict, bioweaponry, natural disasters and economic collapse all work in concert with each other? What I am trying to suggest is that by shutting down the possibilities of bio and nuclear war, you reduce the role that climate change could play in bringing about collapse.
“The influence of climate change on great power war or war more generally seems like the most obvious indirect risk of climate change that could make a substantial difference to the scale of climate change.”
Saying something is the most obvious isn’t evidence or a justification. Your report is 400 pages long, I am pretty sure you have space to justify this core part of your methodological approach. Also, just because one thing is the “most” obvious doesn’t mean others aren’t worthy of consideration. Also, I am often very unclear what an indirect risk means, which again you don’t seem to define in your report. If you could define this for me, I would be happy to answer your question.
Sorry if some of this is unclear. However, I really think a lot of your key ideas could do with better citation/definition. I also think that ignoring a lot of these concepts which are common in the literature, and then putting the burden on me to hash out the arguments in a comment on my weekend, rather than actually addressing these concepts, even if only to reject them, in your 400 page report, is a little odd. Nonetheless, thanks for taking the time to respond to my comments thus far, and apologies if I have missed anything in the report; it is very long and I read it late at night.
I think it might help to make this discussion more concrete if you gave an example of what you mean by a cascading risk. It’s hard to defend the arguments in the report when I’m not sure what you are saying I have missed in my analysis. I talk about risks to the food system, and the spillover effects that might come from that (eg conflict), I talk about purported effects on crime, I talk about drought, I talk about tipping points etc. What is the causal story you have in mind?
The substantive discussion is the outline of all the various impacts that I have discussed, and the summary of the literature on economic costs, which tends to find that the costs of 4ºC are on the order of 5% of GDP. Unless something is radically missing from these analyses, I’m not sure how climate change could make a large difference to the chance of recovery from collapse.
I discuss the potential impacts of climate damages, civil conflict, interstate conflict and the economic impact of climate change at considerable length. Even if these all work in concert with each other, my substantive conclusion is unaffected. I also explicitly discuss the possibility that climate change will cause the use of bioweapons in the report.
I don’t shut down the possibility, I argue against it at considerable length.
You have made a series of conceptual criticisms of the report. I have said that my conceptual approach is exhaustive, which is true, but you seem to think this is unsatisfactory. I don’t think it is unreasonable for you to explain to me what you think I have missed.
A direct climate impact is an impact of climate change for which the proximal cause of the damage is not human-on-human interaction. An example would be something like heat stress deaths or crop failures from drought. An indirect climate impact is an impact for which the proximate cause is human-on-human damage, but for which the ultimate cause is climate change. Examples would be crime, conflict, or undermined institutions.
I think appeals to common sense and what is obvious are often permissible in arguments. Which indirect effect do you think is more important?
I was expecting the discussion to be more like ‘here is why you are wrong about emissions/climate sensitivity/runaway greenhouse/impacts on the food system/impacts on conflict/...’
So sorry for the lateness of this reply; I have been super busy, and this reply will also only be short as I am very busy. It would be good to organise a meeting to chat about this at some point if you’re interested.
On cascading risks, I think a good recent discussion of cascading risk is found in https://www.nature.com/articles/s41467-021-25021-8 . A plausible causal story for how climate change leads to a cascade may be as follows. This is obviously flawed and incomplete and clearly needs more study:
To respond to growing threats from climate change, adaptation measures (often technological) will be put in place. However, these adaptation measures, such as physical defences, are often very fragile. Whilst it is possible agricultural production increases, this would also likely be due to adaptation measures, with newly introduced measures likely to be less resilient due to lack of experience. Moreover, as agriculture in certain regions is hit more badly, it is possible we see production concentrate in a few agricultural hubs: those places less affected by climate change or quicker to adapt. This further increases vulnerability.
One or a number of critical nodes in this system are hit by a hazard, either one made more likely by climate change, or by another hazard. Synchronous failure causes diversion of resources to these areas, and the economic shock makes it harder to sustain adaptation infrastructure, which may then start to fail.
Climate impacts cause increased civil unrest. This causes greater economic and political insecurity, leading to greater diversion of resources away from climate adaptation and towards combating the symptoms. The resources deployed can’t match the pressures on them: environmental, climatic, social, political, economic. A near-complete collapse of a mid-sized economy ensues, with global repercussions.
Meanwhile, climate change has made an engineered pandemic more likely by increasing the number of omnicidal actors. One of these engineered pandemics occurs, killing 1% of the world’s population, synchronously with a major drought, causing global economic collapse. In response to the drought, a water war between mid-sized powers ensues, leading to the involvement of major powers. Sanctions and embargoes between the powers lead to further economic damage. This is compounded by significant food shortages from the drought, and socio-economic damage from the pandemic.
Political tensions rise in India, leading to mass protests. The failure of the Indian government to relieve the famine in a Muslim-majority region causes tensions, and a police massacre turns this into outright rebellion. The Indian government collapses, with major global economic consequences.
Climate impacts continue to scale across the world, with the economic damage from the famine, pandemic and collapse of the Indian government spreading. In the subsequent economic collapse, a major economy defaults on its debts, leading to further economic turmoil. Civil unrest breaks out in another major economy (say Germany), once again causing economic collapse. The new Indian government, crippled and weakened and still under pressure from the civil war, promises to defend its water resources at all costs; however China, also suffering from a drought, cuts off the source of the Brahmaputra. A tactical nuclear weapon is set off. Whilst this doesn’t lead to global nuclear war, in the aftermath markets collapse.
With a collapsing global economy and mounting climate damages, adaptation measures cannot be meaningfully carried out, as maintenance can’t be afforded. This causes more and more damage.
This is an example of a cascade. I didn’t take it all the way to collapse/extinction, but I think you could probably carry it on yourself. This is just one scenario I came up with. It’s probably not the most plausible. But there are thousands of such scenarios, in each of which climate change is a key causal factor, interacting in different ways. This is what I mean by a cascade.
On the separate point of your indirect vs direct framing, my issues with it are twofold. I tend to think indirect risk is just a much larger subset of things than direct risk (it involves many of the causes of hazards, as well as, I assume, all exposures and vulnerabilities?), and yet I don’t think your treatment gives it the necessary depth. So maybe in theory your framework is well defined, but in practice the category of indirect risk is like “the rest”, and so doesn’t give a useful structure for identifying what the key impacts are. I think, even if unintentionally, the rhetoric of employing such a device privileges direct risks hugely, which I tend to think leads nearly inevitably to your conclusion because of this privileging.
When you say you expected other criticisms, I can understand why you may be frustrated with my focus on the meta-issues. But I broadly don’t have much of a problem with what you say, and certainly don’t on any areas outside my area of expertise. It’s what you don’t say, and the methodology that lets you get there, that worries me. That’s why I have focused on these meta-issues: in terms of where I think this piece goes substantially wrong, I think it’s there.
As I have said previously, I think what you miss out is important because you are in a really unique place in the community, through no fault of your own. There are probably few people deferred to in EA more than you are, so the worry is that if you miss things out, or if your methodology is wrong (and as I am sure you will admit, no one is perfect), this will get propagated as orthodoxy through the community. I know you tend not to think of yourself as this sort of person, and I don’t think these epistemically deeply unhealthy dynamics are your fault.
Anyway, sorry I couldn’t give a more substantive reply; I am super busy, and it certainly doesn’t seem like forum comments are the most constructive venue for this discussion. Would you like to have a chat about this at some point so we can really hear each other’s perspectives?
Thanks for the comment, it was interesting to have examples.
By chance, do you have some documentation on cascading risks caused by shortfalls in energy production? Or more data on what would cause the economy to collapse? I’m looking for this since I have made some posts on energy depletion and am trying to keep them updated.
By the way, I find the lack of an answer from John rather worrying, especially as this seems to be a crucial point you’re making, particularly in our interconnected world. Did you manage to chat with him?
Given the degree to which you have highlighted how experts have commented and reviewed the piece, will you, for the sake of intellectual transparency, commit to publishing all this expert feedback like the IPCC does. I think this may really help.
I said this in a subcomment, and it (worryingly) got significantly downvoted. It is a worrying sign for a community if a call for intellectual transparency (which is a key norm in EA) is downvoted just because the writer (ie me) has been critical of the piece.
I have great respect for you as an academic and an EA, and I trust that you will agree that such intellectual transparency is a useful norm, and, if possible, commit to publishing the comments and reviews that those who reviewed the publication sent! The worry in the above paragraph is certainly not directed at you, and I have every confidence that you are and will remain committed to maintaining EA as a space that is as transparent as possible.
All the best
I will ask the experts if I can share their feedback. I did ask a couple of them to do this but after a long review process they didn’t respond so I decided not to ask the other experts if I could share theirs as I thought it would be weird to have comments on some parts but not others. Maintaining interest in the process from experts can be difficult because they sank a lot of time into reviewing the report and have other things to do so there is a risk of over-asking and them not wanting to engage any more.
The reviewers for each section were as follows. Josh Horton reviewed a section on solar geoengineering which I am still in the process of revising for a later version.
Peter Watson, Goodwin Gibbins and James Ozden provided comments on various sections in the report.
Without wanting to pre-empt the reviewer comments if I am allowed to provide them, there was agreement with what I had written and I accepted the vast majority of proposed revisions. I think the main disagreement was that Keith Wiebe disagreed with some of my claims about extreme warming and agriculture. I think maybe Danny Bressler moderately disagreed with my assessment of Burke et al (2015), but I’m not completely sure.
Thanks so much for this. Did any of the reviewers (Peter Watson, Goodwin Gibbins, James Ozden perhaps?) make comments on the overall report ie your methodology, your choices of areas of inquiry etc. As this is my major criticism of your work I would really love to see the reviewers comments on your overall methodology, structure of the report etc
No I didn’t get any of that. I don’t want to put words in their mouths, but Peter overall seemed very positive. I’m less sure what Goodwin and James thought, but they didn’t say anything massively negative, though perhaps they thought it
“I don’t want to put words in their mouths, but Peter overall seemed very positive”
As Peter, just in case this should come back to bite me if misinterpreted, I thought I’d say that I could give an informed review of certain physical climate science aspects, and the report seems to capture those well. I am positive about the rest as an interesting and in-depth piece of scholarship into interesting questions, but I can’t vouch for it as an expert :-)
Would you suggest the depth of your feedback was the depth of peer review? And am I correct in saying, therefore, that you didn’t really review the overall methodology used etc.?
I’d say the depth of review was similar to peer review yes, though it is true to say that publication was not conditional on the peer reviewers okaying what I had written. As mentioned, the methodology was reviewed, yes. So, this is my view, having taken on significant expert input.
A natural question is whether my report should be given less weight than, eg, a peer-reviewed paper in a prominent journal. I think as a rule, a good approach is to start by getting a sense of what the weight of the literature says, and then explore the substantive arguments made. For the usual reasons, we should expect any randomly selected paper to be false. Papers that make claims far outside the consensus position that get published in prominent journals are especially likely to be false. There is also scope for certain groups of scientists to review one another’s papers such that bad literatures can snowball.
This isn’t to say that any random person writing about climate change will be better than a random peer-reviewed paper. But I think there are reasons to put more weight on the views of someone who has good epistemics (not saying this is true of me, but one might think it is true of some EA researchers) and who is actually talking about the thing we are interested in—i.e. the longtermist import of climate change. Most papers just aren’t focusing on that, but will use similar terminology. E.g. there is a paper by Xu and Ramanathan which says that climate change is an existential risk but uses that term in a completely different way to EAs.
I will give some examples of the flaws of the traditional peer review process as applied to some papers on the catastrophic side of things.
1. A paper that is often brought up in climate catastrophe discussions is Steffen et al (2018) - the ‘Hothouse Earth’ paper. That paper has now been cited more than 2,000 times. For reasons I discuss in the report, I think it is surprising that the paper was published, and the IPCC also disagrees with it.
2. The Kemp et al 2022 PNAS paper (also written by many planetary boundaries people) was peer reviewed, but also contains several errors.
For instance, it says “Yet, there remain reasons for caution. For instance, there is significant uncertainty over key variables such as energy demand and economic growth. Plausibly higher economic growth rates could make RCP8.5 35% more likely (27).”
The cite here in note (27) is to Christensen et al (2018), which actually says “Our results indicate that there is a greater than 35% probability that emissions concentrations will exceed those assumed in RCP8.5.” i.e. their finding is about the percentage point chance of RCP8.5, not about an increase in the relative risk of RCP8.5.
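To make the relative-vs-absolute distinction concrete, here is a small sketch; the 20% baseline is purely hypothetical and not from either paper, used only to show how far apart the two readings can be:

```python
# Hypothetical illustration of the two readings of the Christensen et al result.
baseline = 0.20                       # assumed prior P(exceeding RCP8.5), for illustration only
relative_reading = baseline * 1.35    # "35% more likely" scales the prior -> 0.27
absolute_reading = 0.35               # the actual finding: P(exceedance) is itself above 35%
```

Under this assumed baseline, the misquoted "35% more likely" reading gives a 27% chance, while the cited paper actually asserts an absolute probability above 35%: materially different claims.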
Another example: “While an ECS below 1.5 °C was essentially ruled out, there remains an 18% probability that ECS could be greater than 4.5 °C (14).”
The cite here is to the entire WG1 IPCC report (not that useful for checking, but that aside...). The latest IPCC report says “a best estimate of equilibrium climate sensitivity of 3°C, with a very likely range of 2°C to 5°C. The likely range [is] 2.5°C to 4°C”. The IPCC says “Throughout the WGI report and unless stated otherwise, uncertainty is quantified using 90% uncertainty intervals. The 90% uncertainty interval, reported in square brackets [x to y], is estimated to have a 90% likelihood of covering the value that is being estimated. The range encompasses the median value, and there is an estimated 10% combined likelihood of the value being below the lower end of the range (x) and above its upper end (y). Often, the distribution will be considered symmetric about the corresponding best estimate, but this is not always the case. In this Report, an assessed 90% uncertainty interval is referred to as a ‘very likely range’. Similarly, an assessed 66% uncertainty interval is referred to as a ‘likely range’.”
So, the 66% CI is 2.5ºC to 4ºC and the 90% CI is 2ºC-5ºC. If this is symmetric, then this means there is a 17% chance of >4ºC, and a 5% chance of >5ºC. It’s unclear whether the distribution is symmetric or not—the IPCC does not say—but if it is, then the ’18% chance of >4.5ºC’ claim in Climate Endgame is wrong. So, a key claim in that paper—about the main variable of interest in climate science—cannot be inferred from the given reference.
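The tail arithmetic here can be sketched explicitly. Assuming symmetry (which, as noted, the IPCC does not confirm), each tail outside a central interval carries half the leftover probability:

```python
# Tail probabilities implied by the IPCC's two ECS uncertainty intervals,
# under the assumption (not stated by the IPCC) of a symmetric distribution.
p_above_4C = (1 - 0.66) / 2   # 66% likely range is 2.5-4.0 C -> 17% in the upper tail
p_above_5C = (1 - 0.90) / 2   # 90% very likely range is 2.0-5.0 C -> 5% in the upper tail
```

This recovers the 17% (>4ºC) and 5% (>5ºC) figures, with nothing in between pinning down a probability for >4.5ºC.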
3. Jehn et al have published two papers cited in Kemp et al (2022), one of which says that “More likely higher end warming scenarios of 3 °C and above, despite potential catastrophic impacts, are severely neglected.” This is just not true, but nevertheless made it through peer review. Almost every single climate impact study reports the impact of 4.4ºC. There is barely a single chart in the entire IPCC impacts report that does not report that. We can perhaps quibble over what ‘severely neglected’ means, but it doesn’t mean ‘shown in every single chart in the IPCC climate impacts book’. It is surprising that this got through peer review.
As I have said, these are just single studies. I am consistently impressed by how good the IPCC is at reporting the median view in the literature, given how politicised the whole process must be.
I also do not think there is any tendency to downplay risks in the climate science literature. If you look at studies on publication bias in climate science, they find that effect sizes in abstracts in climate change papers have a tendency to be significantly inflated relative to the main text. This is especially pronounced in high impact journals. I have also found this from personal experience. Overall, I think in some cases the risks are overstated, in some they are understated, but there is no systematic pattern.
Probably the best way to examine whether my substantive conclusions are wrong would be to raise some substantive criticisms/carry out a redteam—I would welcome this. I emphasise that if my arguments are correct, then the scale of biorisk is numerous orders of magnitude larger than climate change.
Peer review is very variable so it’s hard to say what “the depth of peer review” is. I checked the bits I was asked to check in a similar way as I would a journal article. No I didn’t myself really review the methodology. The process was also quite different from normal review in involving quite a few back-and-forth discussions—I felt more like I was helping make the work better rather than simply commenting on its quality. It also differed in that the decision about “publishing” was taken by John rather than a separate editor (as far as I know).
I would say that for all of the ‘non-EA’ reviewers, the review was very extensive, and this was also true of some of the EA reviewers (others were more pushed for time). The non-EA expert reviewers were also compensated for their review in order to incentivise them to review in depth.
It is true that I ultimately decided whether or not to publish, so this makes it different to peer review. Someone mentioned to me that some people mean by ‘peer review’ that the reviewers have to agree for publication to be ok, but this wasn’t the case for this report. Though it was reviewed by experts, ultimately I decided whether or not to publish it in its final state.
Thanks for this openness, it’s really appreciated. Any update as to whether the reviewers are happy for their comments to be shared?
So you didn’t get anyone reviewing your overall approach or methodology? Don’t you perhaps think this is a bit of an oversight given how influential this report is likely to be?
Oh sorry, I thought you meant ‘did they leave negative comments about these things’. Lots of people looked at the overall report and were free to point out things I missed.
I still don’t really understand why you have such an issue with the methodology. I took my methodology to be—pick out all of the things in the climate literature that are relevant to the longtermist import of climate change, review the scientific literature on those things, and then arrive at my own view, send it to reviewers, make some revisions, iterate.
John, with all possible respect, that is not a theoretical framework.
I think one of your major errors in this piece (as betrayed by your methodology-as-categorisation comment above), is that you have an implicit ontology of factors as essentially separate phenomena that can perhaps have a few, likely simple relationships, which is simply not how the Earth-system or social systems work.
Thus, you think that if you’ve written a few paragraphs on each thing you deem relevant (chosen informally, liberally sprinkled with assertions, assumptions, and self-citations), you’ve covered everything.
It’s all very Cartesian.
Which impacts do you think I have missed? Can you explain why the perspective you take would render any of my substantive conclusions false?
I’m not sure what you’re talking about with self-citation. When do I cite myself?
Another way to look at it is to think about the impacts included in climate-economy models. Takakura et al (2019), which is one of the more comprehensive, includes:
Heat-related excess mortality
Hydroelectric generation capacity
Thermal power generation capacity
I discuss all of those except cooling/heating demand and hydro/thermal generation capacity, as they seem like small factors relative to climate risk. In addition to that, I discuss tipping points, runaway greenhouse effects, crime, civil and interstate conflict, ecosystem collapse.
Sorry for jumping into this discussion which I haven’t actually read (I just saw this particular comment through the forum’s front page), but one thing that’s absent and I’d be interested in is desertification. I didn’t find any mention of it in the report.
I have an issue with Takakura and other models. All models I’ve seen measure climate impacts in a) a social cost of carbon, whose value is based on a pure time preference discount factor, or b) impacts by the end of the 21st century, which ignores impacts into future centuries. Both of these methods are incompatible with a longtermist ethical view.
If we wanted to get a longtermist-compatible estimate of climate damages, we would have to either calculate a social cost of carbon with a zero discount factor (except for growth-adjustments), or calculate total climate damages over hundreds of years. None of the studies I’ve seen do this. Even worse, we know that social cost of carbon estimates are highly sensitive to the choice of discount factor, which is only possible if a large proportion of damages occur in the future, so we could be underestimating this future damage by a lot. How do you deal with this issue when studying climate change from a longtermist perspective?
Hi Karthik thanks for this comment.
On your first comment, the Takakura et al (2019) study I mention and other models estimate a climate damage function, which is independent of a discounting module. The social cost of carbon is a function of a socioeconomic module, a climate module, a damages module and a discounting module as shown in the schematic below
It is true that some models discount future costs in part with pure time preference. But I am here talking about the damage module which is the undiscounted aggregate damages, not the social cost of carbon.
Also, it is not true that all studies have a positive rate of pure time preference. The Stern review is one prominent counterexample, for instance.
I agree that some (though not all) models typically only consider impacts up to 2100. However, impacts up to 2100 are long-term relevant. If climate-economy models suggested that climate change would cause extinction or civilisational collapse or stagnation before 2100 (as some people seem to think is plausible), this would be long-term relevant. I agree that to get a complete picture, we would need to consider impacts past 2100, but we can still learn a lot from models that only go to 2100. Moreover, it would be practically impossible to build a useful complete climate-economy model running eg 1 million years into the future. It is more informative to explore how climate change might affect proxies of things that might have long-term import such as the risk of civilisational collapse or extinction. If you think we are in the hingiest or most important century, then the impacts of climate change this century are in fact the main thing that determine its long-term effects
This is untrue if the things that make this century hingey are orthogonal to climate change. If this century is particularly hingey only because of AI development and the risk of engineered pandemics, and climate change will not affect either of those things, then the impacts of climate change this century are not especially important relative to future centuries, even if this century is important relative to future centuries.
All the indirect effects of climate that you consider are great-power conflict, resource conflict, etc. I have not seen arguments that claim this century is especially hingey for any of those factors. Indeed, resource conflict and great power conflict are the norm throughout history. So it seems that the indirect effects of climate on these risk factors are relevant not only for the 21st century but for all centuries afterwards.
Takakura does not have a discounting module but considering impacts only up to 2100 is functionally the same as discounting all impacts after 2100. Obviously impacts up to 2100 are relevant to longtermists—my point is that they could be a substantial underestimate of its long-term effects. And you can improve on that substantially with a model that considers 500 years or something similar. It’s a baffling dichotomy to say that you can either consider impacts up to 2100 or millions of years.
I do not think this is true. If we are at a hingey time due to AI and bio, and climate does not affect the hingeyness of this century, then it does not have much impact on the long-term.
You initially said that Takakura et al has a discounting module because it endorses pure time preference. I pointed out that this is not true. So, this seems like changing the subject
I did not say Takakura has a discounting module and this is not changing the subject. What I said was:
Takakura has the latter problem, which is my issue with it as you use it.
This doesn’t seem right as a criterion and is also counter to some examples of longtermist success. For example, the campaign to reduce slavery improved the long term by eliminating a factor that would have caused recurring damage over the long term. Climate mitigation reduces a recurring damage over the long term: if that recurring damage each year is large enough, it can be an important longtermist area. My point is that the impacts of climate in the 21st century are probably a substantial underestimate of their total long-term impact. It’s totally possible that when you account for the total impact it is still not important, but that doesn’t follow automatically from climate’s effect on hingeyness.
Fair enough on your Takakura point, I misread.
I’m not sure I understand your second comment. ‘hingey’ means that we are living at the most influential time ever. This includes things like value change around slavery.
My thought here would be that if climate effects (or other factors) don’t substantially reduce the rate of technological and economic progress over the 21st century, then effects after the 21st century might be likely to be pretty small because our capacity to mitigate them would be enormous. If world real GDP keeps growing at a 3% annual rate, then GDP would be at about a quadrillion dollars/year in 2100 if I’m doing my math right (of course, one might argue it’s likely to be much higher due to AGI, much lower due to slowing technological progress etc.). But that kind of enormous world output would make solutions like scaling up direct air capture to get massively negative carbon emissions feasible. In light of that, it makes a lot more sense to worry about how climate change impacts humanity’s trajectory over the next 80 years than it does to worry about what the impacts will be after 2100.
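The compounding arithmetic in the comment above can be sanity-checked in a few lines. This is a rough sketch, assuming a starting world real GDP of roughly $100 trillion in 2022 (a figure not stated in the comment) and the 3% annual growth rate it mentions:

```python
# Rough check of the "quadrillion dollars/year by 2100" compounding claim.
# Assumes world real GDP of ~$100 trillion in 2022 (an assumed baseline,
# not stated in the original comment) growing at 3% per year.
gdp_2022 = 100e12          # dollars/year, assumed baseline
growth_rate = 1.03         # 3% annual real growth
years = 2100 - 2022        # 78 years of compounding

gdp_2100 = gdp_2022 * growth_rate ** years
print(f"Projected 2100 world GDP: ${gdp_2100:.2e}/year")
# Comes out at roughly 1e15 dollars/year, i.e. about a quadrillion.
```

Under those assumptions the math in the comment holds: 1.03^78 is roughly a factor of 10, taking ~$100 trillion to ~$1 quadrillion per year.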
Climate-economy models factor in technological and economic progress, and yet their SCC estimates are hugely sensitive to the discount rate. The only way I can see this happening is if climate damages in the future are very large.
That assumes a relatively happy path. If there’s some other major one-off catastrophe (eg a major pandemic), the long-term effects of climate change will end up being far harder to deal with.
Thank you for writing this—looking forward to diving into the full report this weekend. Congratulations on finishing what must have been a major undertaking!
“AI: Forecasters on the community forecasting platform Metaculus think that artificially intelligent systems that are better than humans at all relevant tasks will be created in 2042.”
How do you get this from the question’s operationalization?
I thought that was what was meant by AGI? I agree that the operationalisation doesn’t state that explicitly, but I thought it was implied. Do you think I should change it in the report?
I think this strongly depends on how much weight you expect forecasters on Metaculus to put on the actual operationalization rather than the question’s “vibe”. I personally expect quite a bit of weight on the exact operationalization, so I am generally not very happy with how people have been talking about this specific forecast (the term “AGI” often seems to invoke associations that are not backed by the forecast’s operationalization), and would prefer a more nuanced statement in the report.
(Note that you might believe the gap between the question’s resolution criteria and more colloquial interpretations of “AGI” is very small, but this would seem to require an additional argument on top of the Metaculus forecast.)
It looks like the link is broken:
I would appreciate your thoughts on the shape of the relationship between existential risk due to climate change and global warming. For example, would it be reasonable to assume it is linear, i.e. that the x-risk linked to an increase of 2 °C relative to today’s temperature is 2 times as large as the x-risk linked to an increase of 1 °C?
I have a different understanding of the moist greenhouse based on what I’ve read. You said (oversimplifying) that the threshold for a moist greenhouse is 67 °C and that the main risk from it is ocean evaporation.
But in my understanding, 67 °C is the level at which a moist greenhouse climate settles: according to some models, the climate will be stable at that level. A mean temperature of 67 °C would be almost lethal to humans, though some people could survive on high mountains.
However, the threshold for a moist greenhouse (that is, the tipping point after which the transition to the next meta-stable state is inevitable) could be much lower than 67 °C, perhaps after just 10 °C of warming. You discussed this, and the various authors who have written about it, but it contradicts the word “threshold” you used for the moist greenhouse.
According to the article about water worlds, the transition to the next meta-stable climate takes roughly 4–40 years.
As a result, the whole catastrophic moist greenhouse scenario is pushed further into the future, but it could still happen in the next 100 years, so Ord’s estimate is justified.
Thanks for this. Someone else raised some issues with the moist greenhouse section, and I need to revise it. I still think the Ord estimate is too high, but the discussion in the report could be crisper. I’ll report back once I’ve made the changes.
I am going to have a post about the risks of runaway global warming soon.
Action on climate change is in its infancy, and forecasts cannot be relied on, as they cannot account for unforeseen events. Given human nature (people do what is least inconvenient for them, i.e. act in their own self-interest, rather than what is morally right), I’m expecting a backlash against CO2-reducing policies, which I doubt the IPCC forecasts include. Will the majority of people pay for new green infrastructure and a more expensive hydrogen economy? People are already up in arms about the current rises in energy costs. Will governments stand up to the majority who want the cheap energy and convenience they have had for the past few decades?
The only hope I can see of countering this self-interested behaviour is to portray the fossil fuel industries as being as bad as the slave trade of the 18th and 19th centuries. Both are or were cheap sources of energy, and both have caused or will cause great harm to humanity. (A prediction that 4 billion people will be living in a tropical hothouse by 2050 was reported in New Scientist a few weeks ago.)
See “What else is there to say about climate change?” on trevorprew.blogspot.com
As for AI, can’t you just pull the plug if it starts running amok?
Over the long term, it seems reasonable to think in terms of cycles. You know, the pattern over the long term is that civilizations live, and then they fall, and then something else rises to take their place.
The Roman Empire must have seemed permanent to those living at the time, but the empire fell, a period of darkness (in Europe) followed, and then a new, even more impressive civilization rose from the ashes.
Over the long term, it seems likely that eventually this life/death cycle of human civilizations will cease, for everything in nature that has a beginning also has an end.
This is maybe getting too philosophically off topic, but it is perhaps interesting to question whether our personal mortality, civilization mortality, or species mortality is automatically a bad thing. These events are all inevitable, so there’s a logic in making peace with them by whatever means one can find.
Maturity may involve the ability to hold conflicting inclinations together within one’s mind. On the one hand it is our job as individuals and a species to struggle mightily to survive, while at the same time joyfully embracing the reality that we won’t.
Hello, since animal agriculture is one of the leading causes of climate change, I thought to emphasize that here and to let you know that if you would like to join our efforts to end animal agriculture world-wide by first changing laws in California state towards ending animal agriculture, please feel free to message me at +1 (650) 863-1550. I’m a volunteer with Compassionate Bay and Direct Action Everywhere (DxE). We need more supporters to change social norms and to pass laws towards ending animal agriculture world-wide.