Writings mostly about systemic cascading risks.
Richard R
Thanks a ton for your critique!
Your argument could extend to almost any intervention: any progress one makes on, for instance, disease prevention or malaria nets affects the same outcome of economic wellbeing, and thus the transition to and resilience against climate change.
I think a lot of these arguments remind me of the narrow vs broad intervention framework, where narrow interventions are targeted at mitigating a specific type of risk, while broad interventions are generally positive interventions, like improving economic wellbeing or distributing malaria nets, that have ripple effects.
Your point would be that the systemic cascading lens enables us to justify any broad intervention through its nth order impacts.
But my response would be that I’m not necessarily advocating for broad interventions, especially ones that might be perceived as taking time, having unpredictable effects, and often working with very general concepts like “peace” or “education.” While I still use n-th order effects to articulate my argument (and express the importance of economic & political systems in longterm risk), I’m arguing for a very narrowly focused intervention: one meant to mitigate very specific political risks by securing stable supplies of essential commodities during times of general political crisis, elucidated through the systemic cascading risk framework.
I’d further argue that systemic cascading risks aren’t just defined by ripple effects that propagate through systems (then everything would definitionally be a systemic cascading risk or benefit), but rather by ripples that increase in magnitude due to system vulnerabilities, which helps confine the definition to a narrow subset of risks.
Although my critique at large is that EA has failed to connect complexity with longtermism, I’m arguing that the systemic cascading lens fills that gap – enabling specific, tractable, and targeted interventions.
Thanks a ton for your kind response (and for being the person who points these things out). :)
“Counterfactual” & “replaceability” work too and essentially mean the same thing, so I’m really choosing which beautiful fruit I prefer in this instance (it doesn’t really matter).
I slightly prefer the word contingent because it feels less hypothetical and more like you’re pulling a lever for impact in the future, which reflects the spirit I want to create in community building. It also seems to reflect uncertainty better: e.g. the ability to shift the path dependence of institutions, the ability to shape long-term trends. Contingency captures how interventions affect the full probability spectrum and time-span, rather than just envisioning a hypothetical alternate-history world with and without an intervention in x years. Thus, despite hearing the other phrases, it was the first word that clicked for me, if that makes sense.
Terribly sorry for the late reply! I didn’t realize I missed replying to this comment.
I appreciate your kind words, and I think your thoughts are very eloquent and ultimately tackle a core epistemic challenge:
to my knowledge we (EA, but also humanity) don’t have many formal tools to deal with complexity/feedback loops/emergence, especially not at a global scale with so many different types of flows … some time could be spent to scope what we can reasonably incorporate into a methodology that could deal with that complexity.
I recently wrote a new forum post on a framework/phrase I used tying together concepts from complexity science & EA, arguing that it can be used to provide tractable resilience-based solutions to complexity problems.
At various points in history, some dominant class—say capitalists, men, or white Europeans—have developed a set of concepts for describing and governing social reality which serve their own interests at the expense of overall welfare. As such these concepts come to embody a particular set of values. There are multiple ways this can happen—it could be a deliberate, pernicious act by members of the dominant class; it could be the result of unconscious biases of a group of researchers; or it could be the result of a systematic selection pressure, in which ideas that favour the dominant class are more likely to gain popularity and funding and thus have a wider influence. In any case, these concepts can come to form the foundation for many kinds of institutional formation, such as the constitution of a state, popular theories of economics or political economy, or the frameworks which underlie a technical discipline and resultant technologies. Once the institutional formation becomes a regular, common sense part of society, the values which informed its foundational concepts become locked-in. It takes a substantive (often revolutionary) moment to unlock these values and bring about a different status quo.
I liked the above quote especially, as well as the ending about how historical analysis can help identify suboptimal values and assumptions embedded within EA itself.
Oftentimes, technology development, research & development, and mathematical frameworks (e.g. game theory, microeconomics) are seen as independent of ideology. You make a convincing point that the concept of a value/ideology lock-in within technology has deep historical precedent that must be studied.
Thank you so much for writing this. It was very comprehensive and highlighted how the intersection of social values and technology may be overlooked in EA.
I especially liked how the “societal friction, governance capacity, and democracy” section of the forum post ties together strengthening democracy, inter-group dynamics, disenfranchised groups, and long-term technological development risk through the path dependence framework; it seems like a very relevant & eloquent explanation for government competence that we see play out even in current events.
A common argument is that on the margin, short and medium term AI issues are likely not neglected (as opposed to long-term issues) so one would not be able to make a big impact. I’d especially be curious about targeted, tractable interventions you believe may be worth looking into, where an additional EA on the margin would make a contingent impact or significantly leverage existing resources.
I love your thoughts on this.
Need to do more thinking on whether this point is correct, but a lot of what you’re saying about forging our own institutions reminds me of Abraham Rowe’s forum post on EA critiques:
EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
I’m assuming that EA is generally not missing huge opportunities for impact. As time goes on, theoretically many grants / decisions in the EA space ought to be becoming more effective, and closer to what the peak level of impact possible might be.
Despite this, it seems like relatively little effort is put into changing the minds of non-EA funders, and pushing them toward EA donation opportunities, and a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.
An entire category of risks is undervalued by EA [Summary of previous forum post]
Constantly expanding list of mistakes I made / things I would change in this post (am not editing at the moment because this is an EA criticism contest submission):
1)
Toby Ord wrote similarly that he preferred narrow over broad interventions because they can be targeted and thus most immediately effective without relying on too many causal steps.
I misinterpreted what Toby Ord was saying in The Precipice (page 268). He specifically claimed he preferred narrow/targeted over broad interventions because they can be targeted toward technological risks directly & thus can be expected to accomplish much more, compared to previous centuries. (He also made a neglectedness-based argument for targeted interventions.) I believe it was other people or other things I read (likely where the confusion comes from) that made claims about causal steps using the targeted vs broad framework.
I’m also not arguing for broad interventions, necessarily. As commonly used, the narrow vs broad framework doesn’t fully capture my argument for the importance of systemic cascading risk for multiple reasons:
While broad interventions might generally have ripple effects (n-th order effects), systemic cascading risks have n-th order effects that increase in magnitude. The rate and magnitude of the cascade are more precisely specified within the systemic cascading risk framework.
I claim that “My critique is that EA at large has failed to adequately connect complexity effects with longtermist cause area ranking & provide resolute, tractable solutions to such problems.” I’m arguing that the systemic cascading framework fills that gap, enabling specific and targeted interventions.
Broad interventions might be perceived as taking time, having unpredictable effects, and often working with very general concepts like “peace” or “education.”
While I still use n-th order effects to articulate my argument (and express the importance of economic & political systems in longterm risk), I’m arguing for a very narrowly focused intervention – meant to mitigate very specific political risks through securing stable supplies of commodities necessary to live during times of general political crisis & elucidated through the systemic cascading risk framework.
For all those reasons, I’d probably remove this quote.
2)
Refugees: ~216 million climate refugees by 2050 (World Bank Groundswell Report) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns
I didn’t realize the phrase “climate refugees” implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks to John Halstead for pointing this one out; through this quote, I unintentionally misrepresented the weight of the evidence.
If I were to edit & rephrase it, it’d look something like: “~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order of magnitude estimate for total cross-border climate migrants and refugees (figures which are much harder to quantify)”.
I disagree with the following:
But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above.
“I doubt you can make a case that’s robustly compelling...”
Systemic cascading effects and path dependency might be very coherent consequentialist frameworks & catchphrases to resolve a lot of your epistemic concerns (and this is something I want to explore further).
Naive consequentialism might incentivize you to lie to “do whatever it takes to do good”, but the impacts of lying can cascade and affect the bedrock institutional culture and systems of a movement. On aggregate, these cascading (second-order) effects will make it more difficult for people to trust each other and work together in honest ways, making the moral calculus not worth it.
Furthermore, this might have a path-dependent effect, analogous to a significant/persistent/contingent effect, where choosing this path encodes certain values in the institution and makes it harder for other community values to arise in the future.
This similarly generalizes to most “overoptimization becomes illogical” problems. Naive consequentialism & low-integrity epistemics rarely make sense in the long run anyways, so it’s just a matter of dispelling simplified, naive models of reality and coherently phrasing the importance of epistemics, diversity, and plurality through a consequentialist lens.
“...and is widely agreed upon.”
Still relatively new to the community, so I might have the wrong view on this—but I’m always remarkably surprised by how openly EAs are willing to discuss flaws in the community & are concerned about solid epistemics within the community.
E.g. I recently posted a submission to the EA criticism contest, and it’s difficult for me to imagine any other subgroup that pours $100k into a contest to seriously consider and reward internal & external criticism of its most fundamental values and community.
I agree with the following statement:
We need the type of system you’re talking about, but we also need resiliency built into the system now.
My low-confidence rationale for including a section on modeling, scenario analysis, & its helpfulness to building resiliency is twofold:
1. Targeting & informing on-the-ground efforts: Overlaying accurate climate agriculture projections on top of food trading systems can help us determine which trade flows will be most relied on in the future and target interventions where they would be most effective and neglected, e.g. selecting between various agriculture interventions in different regions, lobbying for select policies or local food stocks, and tailoring food resilience research/engineering efforts towards countries and situations that are projected to need it most (a toy sketch of this kind of overlay follows after point 2).
Even for large-scale reforms, I feel like trade models can help inform the right balance of redundancy vs efficiency in a given situation.
2. Influencing risk-sensitive actors: Having accurate trade flow models can also help determine & project dangerous economic second-order consequences, creating more accurate risk analyses and thus further incentivizing governments and risk-sensitive organizations toward a coordinated systemic reform/response.
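To make point 1 concrete, here is a minimal, purely illustrative sketch; it is not from the original post, and all exporter names, import shares, and projection figures are hypothetical assumptions. It shows how overlaying projected climate-driven supply changes on current import dependence could rank which trade flows are most critical for a given importer:

```python
# Toy overlay of hypothetical climate projections on hypothetical trade flows.
# Everything here (names, shares, projections) is made up for illustration.

# Share of an importer's staple supply currently met by each exporter.
import_shares = {"exporter_A": 0.40, "exporter_B": 0.35, "exporter_C": 0.25}

# Projected fractional change in each exporter's exportable surplus
# under an assumed climate scenario (negative = contraction).
projected_surplus_change = {"exporter_A": -0.30, "exporter_B": 0.05, "exporter_C": -0.10}

def criticality_ranking(shares, surplus_change):
    """Rank exporters by (current reliance) x (projected contraction)."""
    scores = {
        exporter: share * max(0.0, -surplus_change[exporter])
        for exporter, share in shares.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Flows at the top of the ranking are candidates for targeted interventions
# (local stocks, policy lobbying, resilience research), per point 1 above.
print(criticality_ranking(import_shares, projected_surplus_change))
```

A real model would of course use actual trade matrices and crop projections; the point is only that the overlay yields a concrete prioritization rather than a vague appeal to resilience.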
Open to having this opinion change.
Quick thoughts: People might be a lot more sympathetic to migrants (or refugees) who are of similar cultural backgrounds to them, prompting less social tension and political extremism.
As a notable example, the political effects of Arab vs Ukrainian refugees on Europe are markedly different.
I didn’t realize the phrase “climate refugees” implied involuntary cross-border migration and mistook it for a blanket term for climate migration. Thanks for the catch!
For the sake of fairness for the EA criticism contest, I won’t edit the mistake now but maybe after the competition winners have been announced. If I were to edit & rephrase it, it’d look something like:
~216 million internally displaced climate migrants by 2050 (World Bank Groundswell Report), which can give a rough order of magnitude estimate for total cross-border climate migrants and refugees (figures which are much harder to quantify)
Systemic Cascading Risks: Relevance in Longtermism & Value Lock-In
Hey Thomas! Love the feedback & follow-up from the conversation. Thanks for taking so much time to think this over; this is really well-researched. :)
In response to your arguments:
1 → 2 is generally well established by climate literature. I think the quote you provided gives me good reasons for why climate war may not be perfectly rational; however, humans don’t act in a perfectly rational way.
There are clear historical correlations between rainfall patterns and civil tensions, expert opinions on climate causing violent conflict, etc. I’d like to reemphasize that climate conflict is often not just driven by resource scarcity dynamics, but also amplified by the irrational mentalities (e.g. they’ve stolen from us, they hate us, us vs them) that have driven humanity to war for many decades before. There is a unique blend of rational and irrational calculations that plays into conflict risk.
2 → 3 → 4 is absolutely tenuous because our systems have rarely been stressed to this extent, so little to no historical precedent exists. However, this climate tension also interacts non-linearly with other elements of technological development, e.g. international AGI governance efforts may be significantly harder between politically extreme governments and in the context of rising social tension.
To address the “greatest risk” point for 3 → 4: I agree and/or concede, because my opinions have changed since I wrote this, as I’ve talked to more researchers in the AI alignment space.
From linkchain framing to systems thinking:
This specific 1->2->3->4 pathway directly causing existential risk may feel unlikely, and it is (alone). However, the emphasis I’d like to make is that there is a category of (usually politically related) risks that have the potential to cascade through systems in a rather dangerous, non-linear, volatile manner.
These systemic cascading risks are better visualized not as a linear linkchain where A affects B affects C affects D (because this only captures one possible linkage chain and no interwoven or cascading effects), but rather as a graph of interconnected socioeconomic systems where one stresses a subset of nodes and studies how this stressor affects the system. How strong the butterfly effect is depends on the vulnerability and resiliency of its institutions; thus, I aim to advocate for more resilient institutions to counter these risks.
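As a rough illustration of this graph framing (my own toy sketch, not a model from any cited source; the node names, edges, and resilience values are all assumptions), one can represent systems as nodes, dependencies as edges, and let a shock propagate while being amplified at low-resilience nodes:

```python
from collections import defaultdict

# Hypothetical socioeconomic systems and the dependencies between them.
edges = {
    "food_exports": ["food_prices"],
    "food_prices": ["political_stability"],
    "energy_supply": ["food_prices", "political_stability"],
    "political_stability": ["international_cooperation"],
}

# Resilience in (0, 1]: fragile nodes (below 0.5) amplify incoming stress.
resilience = {
    "food_exports": 0.9,
    "food_prices": 0.5,
    "energy_supply": 0.8,
    "political_stability": 0.4,
    "international_cooperation": 0.6,
}

def propagate(initial_shock, steps=4):
    """Spread stress across the graph; each hop is scaled by 0.5 / resilience."""
    stress = defaultdict(float, initial_shock)
    for _ in range(steps):
        updated = defaultdict(float, stress)
        for source, targets in edges.items():
            for target in targets:
                transmitted = stress[source] * 0.5 / resilience[target]
                updated[target] = max(updated[target], transmitted)
        stress = updated
    return dict(stress)

# Stress the food-export node and see where the cascade lands hardest.
print(propagate({"food_exports": 1.0}))
```

The qualitative point is the one above: whether a shock dies out or blows up depends on the resilience of the nodes it passes through, not just on the existence of linkages.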
Thank you so much for this well-written article. I especially love the calculations on cost-effectiveness & the comparison of newborn deaths versus other EA cause areas – your proposal clearly makes sense as an alternate GiveWell cause area from a DALYs perspective.
As a student during the pandemic, I’m quite skeptical of online education – but on the other hand, the unit economics are too good for me to ignore. It only takes one decent, quality course to scale and one can have an outsized return on investment.
Therefore, I’d love to know: how do you train people online, effectively? And to the extent previous health training courses exist, how effective have they been and what are their shortcomings?
If it works out, I feel like this “med-ed-tech” model could work really well for a lot of different health professions in developing countries, making an outsized impact. Would also be curious to hear what would make certain professions easy to train online and which would be the most difficult relative to their impact.
This is very fair criticism and I agree.
For some reason, when writing “order of magnitude,” I was thinking about existential risks that may have a 0.1% or 1% chance of happening being multiplied into the 1-10% range (e.g. nuclear war). However, I wasn’t considering many of the existential risks I was actually talking about (like biosafety, AI safety, etc.); it’d be ridiculous for AI safety risk to be multiplied from 10% to 100%.
I think the estimate of a great power war increasing the total existential risk by 10% is much more fair than my estimate; because of this, in response to your feedback, I’ve modified my EA forum post to state that a total existential risk increase of 10% is a fair estimate given expected climate politics scenarios, citing Toby Ord’s estimates of existential risk increase under global power conflict.
Thanks a ton for the thoughtful feedback! It is greatly appreciated.
This point has helped me understand the original post more.
I feel that too often, EAs take current EA frameworks and ways of thinking for granted instead of questioning those frameworks and actively trying to identify flaws and in-built assumptions. Thinking through and questioning those perspectives is a good exercise in general, and it is also extremely helpful for contributing to the motivating worldview of the community.
I still don’t believe that this necessarily means EAs “tend toward the religious”; there are probably several layers of nuance missing in that statement.
All in all, I’d love to see more people critique EA frameworks and conventional EA ideas in this forum—I believe there are plenty of flaws to be found.
Hey! I liked certain parts of this post and not others. I appreciate the thoughtfulness with which you critique EA in this post.
On your first point about the AI messiah:
I think the key distinction is that there are many reasons to believe this particular argument about the dangers of AGI is correct. Even if many claims with a similar form are wrong, that doesn’t exclude this specific claim from being right.
“Climate scientists keep telling us about how climate change is going to be so disastrous and we need to be prepared. But humanity has seen so many claims of this form and they’ve all been so wrong!”
The key distinction is that there is a lot of reason to believe that AGI will be dangerous. There is also a lot of reason to support the claim that we are not prepared currently. Without addressing that chain of logic directly, I don’t think I’m convinced by this argument.
On your second point about EA’s religious tendencies:
Because religious communities are among the most common communities we see, there are obviously going to be parallels between religious communities and EA.
Some of these analogies hold, others not so much. We, too, want to community build, network, and learn from each other. I’d love for you to point at specific examples of things EAs do, from holding conferences to running EA university groups, that are ineffective or unnecessary.
On the perhaps greater point of EA becoming too groupthink-y, which I think may be warranted:
I think a key distinction is that EA has a healthy level of debate, disagreement, and skepticism, while religions tend to demand blind faith in something unprovable. This ongoing debate on how to do the most good is what I personally find most valuable in the community, and I hope this spirit never dies.
Keep on critiquing EA; I think such critiques are extremely valuable. Thanks for writing this.
Thanks a ton for your comment! I’m planning to write a follow-up EA forum post on cascading and interlinking effects—and I agree with you in that I think a lot of times, EA frameworks only take into account first-order impacts while assuming linearity between cause areas.
This comment seems to violate EA forum norms, particularly by assuming very bad faith on the part of the original poster (e.g. “these claims smell especially untrustworthy” and “I don’t think these arguments are transparent”). The comments made certainly offer very creative interpretations of the original post.
I believe you’re aware that signatories such as Anders Sandberg and SJ Beard are not advocating for “folding EA into extinction rebellion”—an extremely outlandish claim and accusation.
Many of the comments give untrue interpretations of the original statement, which substantively states that the very young academic field of existential risk has a lot to learn from other academic fields, such as disaster risk literature or science and technology studies. I believe this is a reasonable perspective, hence I agree with the original post.
And it’s absolutely possible to have a plurality of ideas from different academic fields while drawing a line for “homophobes, Trump supporters, and people who want China to invade Taiwan”.