In any case, the metacrisis is the underlying crisis driving a multitude of crises: not just ecological collapse (which is certainly bad enough) but a range of governance and security issues, alongside global economic instability, inequality within countries, a steep rise in mental health problems and a decline in social trust. It’s as if we have a civilisation-level wicked problem. In his first article on The Emergentsia, Brent Cooper writes: “The term refers to the set of root problems behind all major crises. The idea goes back at least to the 1970 Club of Rome report, which describes 49 ‘continuous critical problems’, which they also call ‘meta-problems’.” Moreover, in a talk at Google, ‘Confronting the “Meta-Crisis”: Criteria for Turning the Titanic’, philosopher and entrepreneur Terry Patten reflects on the need to speak of the meta-crisis as “the sum of our ecological, economic, social, cultural, and political emergencies.”
from https://cusp.ac.uk/themes/m/blog-jr-meta-crisis/
I’m not overly keen on the terms meta/polycrisis. But ‘collapse’ ain’t great either.
At any rate, they’re all gesturing in the same general direction: civilization is a complex system composed of, reliant on, and interacting with other complex systems. Many of these systems are out of equilibrium; many are under stress or degrading.
Problems in one system, e.g. energy, ripple out and have all kinds of chaotic effects in other systems, some of which can feed back into the energy system.
And this is roughly where the optimists and the pessimists go their separate ways—it usually takes a pessimistic disposition to go around finding and connecting all these horrifying little dots and perceive the ‘metacrisis’, and for a pessimist the foregone conclusion is that we’re all doomed :)
This conclusion is anathema to optimists, so the baby tends to get thrown out with the bathwater.
The reason the metacrisis is a valuable framework, to EA most of all, is that it’s a powerfully predictive model of the world—by revealing the interconnected clusterfuck of everything, it also highlights areas where successful intervention would have massive, system-wide effects.
(And it shows you where interventions that might seem effective in a vacuum are, in effect, meaningless.)
To give you a solid example of the kind of thing I’m talking about: trust is a cause area that is almost totally neglected, yet is actually a bottleneck in almost every single other cause area—inequality, nuclear proliferation, climate change, AI safety, etc. If you find or make a tool for scaling trust, you’ll basically hit the jackpot in terms of EFFECTIVE altruism.
One last point in favour of the metacrisis framework: it gives you realistic timelines. I referenced this earlier, but this really is the point I can’t hammer home enough:
The hinge of history is shorter than EAs think.
I genuinely believe that this community/movement is the best candidate for navigating the hinge of history successfully, but I also worry that there is a lack of urgency/focus due to far too optimistic interpretations, e.g. ‘the most important century’.
Based on the article you linked, it seems like ‘meta-crisis’ thinking employs a bundle of concepts that LessWrong often calls ‘Moloch’ or ‘simulacra levels’ or ‘inadequate equilibria’ or simply tradeoffs. This line of analysis attempts to use these ideas to explain failures of collective action to implement complex institutional change, and generate solutions to overcome this inertia.
I’m sympathetic to the need to address issues of governance and collective action. However, what interests me are clear problem-solution pairs with good evidence, a straightforward mechanism, and adequate information feedback to see if it’s working. “We should switch to approval voting” meets those criteria.
I’m less excited about interrogating “the very idea of ‘the economy’ or what exactly we mean by ‘money’” or the idea that “too much liberty may kill liberalism, too much voting can weaken democracies, and we don’t always understand how we understand, we tend to deny our denial, and we are struggling to imagine a new imaginary.”
You’re broadly correct that the metacrisis is in the same neighbourhood as stuff like Moloch and inadequate equilibria.
I definitely wouldn’t say that the metacrisis is a ‘governance/collective action’ issue, although that’s certainly an important piece of the problem.
what interests me are clear problem-solution pairs with good evidence, a straightforward mechanism, and adequate information feedback to see if it’s working.
I too like simple solutions with clear feedback. Who doesn’t? If only the world were so cooperative.
But… this strategy results in things like rearranging deckchairs on the Titanic; you’re almost guaranteed to miss the forest for the trees.
This is exactly why I want to bring these two cultures closer together: EAs have an incredible capacity for action and problem solving, more than any other community I’ve seen.
buuut that capacity needs to be informed by a deep macro understanding of the world, such as those who study the metacrisis possess. Otherwise: deckchairs, Titanic.
And, as an aside to your aside, while you’re less excited about “what we mean by ‘money’”, I’d point out that people not knowing[1] the answer to that question has resulted in a great deal of destruction and inefficiency.
[1] Both in the sense that ‘normal’ people vote for nonsensical monetary policies, and that decision makers propose and enact nonsensical monetary policies.
So I have a lot of questions. I’ll try to ask them one or two at a time.
It seems like you’re claiming something like this:
“Clearly, there are a bunch of emergencies, which have causes and solutions. For a lot of them, we know what the causes and solutions are, but don’t implement them. That is probably because our global institutions have big complicated effects on each other, but nobody has a very good predictive model of what the chain of cause and effect is or how to intervene in it productively. Like, if you wanted to pass a carbon tax, what would you even do?
Probably, if we studied that, we could figure out some sort of complicated way to make all these global institutions fit together way better, so that we’d be just a lot happier with life. It’s sort of like we have a medieval doctor’s understanding of how the body works, and we’d be a lot better off stopping trying to ‘treat’ most problems and starting to study how the body works in detail. Except instead of the body, it’s ‘global institutions and culture’ and instead of medicine it’s all sorts of political/cultural/economic/scientific interventions.”
Is that roughly what you mean?
Mmmm, I’ll try my best to deconfuse.
Clearly, there are a bunch of emergencies.
Some of these emergencies are orders of magnitude more important or urgent than others.
My first claim is that scale and context matter: e.g. an intervention in cause area X may be obvious and effective when evaluated in isolation, but in context the lives saved from X are then lost to cause Y instead.
My second claim is that many of these emergencies are not discrete problems.
Rather, they are complex interdependent systems in a state of extreme imbalance, stress or degradation—e.g. climate, ecology, demography, the economy.
My third claim is that, yes, governance is a more-or-less universal bottleneck in our ability to engage with these emergencies.
But, my fourth claim is that this doesn’t make all of the above a governance problem. Solutions to governance do not solve these emergencies; they simply improve our ability to engage with them.
If you really, really want somewhere specific to point the finger, it’s Homo sapiens. There’s a great quote: “We have Stone Age emotions, medieval institutions and god-like technology”—E. O. Wilson.
Practically, my position, informed by the metacrisis, is that:
We have less time to make a difference than is commonly believed in EA circles, and the difference we have to make has to be systemic and paradigm-changing—saving lives doesn’t matter if the life support system itself is failing.
Thus interventions which aren’t directly or indirectly targeting the life support system itself can seem incredibly effective while actually being a textbook case of rearranging the deckchairs on the Titanic.
P.S. Thanks for your time and patience in engaging with me on this topic and encouraging me to clarify in this manner.
We have less time to make a difference than is commonly believed in EA circles
How much time do you think we have? My impression is that a lot of EAs at least are operating with a sense of extreme urgency over their AI timelines and expectations of risk (e.g. 10 years, 99% chance of doom). It would be informative to give a numeric estimate of X years until Y consequence, accepting that it’s imprecise.
Thus interventions which aren’t directly or indirectly targeting the life support system
So it sounds like you are an X-risk guy, which is a very mainstream EA position. Although I’m not sure if you’re a “last 1%-er,” as in weighing the complete loss of human life much more heavily than losing say 99% of human life. But it sounds like your main contention is that weird complicated environmental/economic/population interactions that are very hard to see directly will somehow lead to doom if not corrected.
Overall there’s a motte here, which is “not all interventions help solve the problem you really care about, sometimes for complicated reasons.” I’m just not sure what the big insight is about what to do, given that fact, that we’re not already doing.
95% certainty <100 years, 80% certainty <50 years, 50% certainty <30 years... But the question is ‘how much time do we have until X?’ and for that...
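(Tangent: to make those three numbers concrete, here’s a minimal sketch that reads them as points on a cumulative distribution over ‘years until X’. The linear interpolation between the stated points, and the example query, are my own illustrative assumptions, not anything stated above.)

```python
import numpy as np

# The estimates above, read as points on a CDF of T = 'years until X':
#   P(T < 30)  = 0.50
#   P(T < 50)  = 0.80
#   P(T < 100) = 0.95
years = np.array([30.0, 50.0, 100.0])
cum_prob = np.array([0.50, 0.80, 0.95])

def p_within(t):
    """Chance of X occurring within t years, linearly interpolated
    between the stated quantiles (the interpolation is an assumption)."""
    return float(np.interp(t, years, cum_prob))

print(p_within(40))  # ~0.65, i.e. roughly a two-in-three chance within 40 years
```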
So it sounds like you are an X-risk guy, which is a very mainstream EA position. Although I’m not sure if you’re a “last 1%-er,” as in weighing the complete loss of human life much more heavily than losing say 99% of human life.
This is where I diverge heavily, and where the metacrisis framework comes into play: I am a civilization x-risk guy, not a homo sapiens x-risk guy.
My timeline is specifically ‘how much time do we have until irreversible, permanent loss of civilizational capacity[1]’.
Whether humans survive is irrelevant to me[2]. What seems clear to me is that we are faced with a choice between two paradigm shifts: one in which we grow beyond our current limitations as a species, and one in which we are forever confined to them.
Technology is the deciding factor; to quote Homer Simpson, it’s ‘the cause of, and solution to, all of life’s problems’ :p
And achieving our current technological capacity is not repeatable. The idea that future humans can rebuild is incredibly naive yet rarely questioned in EA[3].
If you accept that proposition, even if just for the sake of argument, then my emphasis on the hinge of history should make sense. This is our one chance to build a better future; if we fail, then none of the futures we can expect are ones any of us would want to live in.
And this is where the insight of the metacrisis is relevant: interventions focused on the survival/flourishing of civilization itself are, from my[4] point of view, the only ones with positive EV.
What to do that we’re not already doing:
Increased focus/prioritization of:
- Governance (both working with existing decision-making structures, and enabling the creation and growth of new ones)
- Social empowerment/‘uplift’ (thinking specifically of things like Taiwanese Digital Democracy)
- Economic innovation—the fact that we are reliant on the philanthropy of billionaires is conclusive evidence that the current system is well overdue for an overhaul.
- Resilience (a really broad category):
  - (The former three points are critical for this category as well: inequality, social unrest and incompetent governance are huge sources of fragility.)
  - Domestic energy infrastructure
  - Domestic sustainable food production
  - Global health and longevity (and by ‘global’, I mean everywhere with rapidly aging demographics)
  - Pandemic management/prevention
I would say all of these areas are either underprioritized or, as in the case of global health, often missing the forest for the trees (literally—saving trees without doing anything about the existential threat to the forest itself).
[1] Most notably, loss of energy- and capital-intensive advanced technologies dependent on highly specialized workers, global supply chains and geopolitical stability (i.e. no one dropping bombs on your infrastructure), e.g. computing.
[2] I know how this sounds, but to me the opposite (human survival as an ultimate goal) sounds like paperclip maximizing.
[3] This is a whole other argument, and I don’t really want to get into it now. This is what I’ve been trying to write a post on for a while now. I find it personally quite frustrating, as I feel the burden of evidence should be on those making the extraordinary claim—i.e. that rebuilding is possible.
[4] Admittedly rather fringe.
Just a note on your communication style, at least on EA forum I think it would help if you replaced more of your “deckchairs on the Titanic” and “forest for the trees” metaphors with specific examples, even hypothetical.
For example, when you say “I would say all of these areas are either underprioritized or, as in the case of global health, often missing the forest for the trees (literally—saving trees without doing anything about the existential threat to the forest itself),” I actually don’t know what you mean. What are the forest and what are trees in this example? Like, you say “literally saving trees,” but unless you for some reason consider forest preservation to fall under the umbrella of global health, it’s not literally saving trees.
Anyway, I think I see a little more where you’re coming from, let me know if I’m misunderstanding.
You start by assuming that a civilizational collapse would be irrecoverable, and just about as bad as human extinction.
Given that assumption, you see a lot of bad stuff that could wipe out civilization without necessarily killing everybody, like a global food supply disaster, a pandemic, a war, climate change, energy production problems, etc.
Since all these potential sources of collapse seem just as bad as human extinction, you think it’s worth putting effort into all of them.
EA often prioritizes protecting human/sentient life directly, but doesn’t focus that hard on things like evaluating risks to the global energy supply except insofar as those risks stem from problems that might also just kill a lot of people, like a pandemic or AI run amok.
Overall, it seems like you think there are a lot more sources of fragility than EA takes into account, lots of ways civilization could collapse, and EA’s only looking at a few.
Is that roughly where you’re coming from?
Yeah that’s a good summary of my position.
Just a note on your communication style, at least on EA forum I think it would help if you replaced more of your “deckchairs on the Titanic” and “forest for the trees” metaphors with specific examples, even hypothetical.
Thanks, will keep this in mind. It’s been an active (and still ongoing) effort to adjust my style toward EA norms.
Do you think civilization generally is fragile? Or just, like, post-industrial civilization? We have seen the collapse and reconstruction of civilization to varying degrees across history. But we’ve never seen the collapse and revitalization of an industrial society. Is it specifically that you think we’ve used up some key inputs to starting an industrial civ, like maybe easily accessible coal and oil reserves or something?
Ah, I want to acknowledge that the definition of civilization is quite broad, without getting too far into the weeds on this point.
I heard the economist Steve Keen describe civilization as ‘harnessing energy to elevate us above the base level of the planet’ (I may be paraphrasing somewhat).
I think this is a pretty good definition, because it also makes clear why civilization is inherently unstable—and thus fragile: it is, by definition, out of equilibrium with the natural environment.
And any ecologist will know what happens next in this situation—overshoot[1].
So all civilization is inherently fragile, and the larger it grows the more it depletes the carrying capacity of the environment.
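To illustrate the overshoot dynamic concretely, here’s a minimal toy sketch; the functional form and every number in it are illustrative assumptions of mine, not a calibrated model. A population grows logistically toward a carrying capacity that its own consumption erodes faster than the environment regenerates:

```python
def simulate(steps=3000, P=1.0, K=100.0, K0=100.0,
             r=0.05,          # population growth rate per step
             depletion=0.02,  # capacity eroded per unit of population per step
             regen=0.001):    # rate at which capacity recovers toward baseline K0
    """Toy overshoot model: P grows logistically toward K, while K is
    depleted by use and only slowly regenerates."""
    trajectory = []
    for _ in range(steps):
        P += r * P * (1 - P / K)               # logistic growth toward current K
        K += regen * (K0 - K) - depletion * P  # regeneration minus depletion
        K = max(K, 1e-6)                       # keep capacity positive
        trajectory.append((P, K))
    return trajectory

traj = simulate()
peak = max(p for p, _ in traj)
final_P, final_K = traj[-1]
print(f"peak population {peak:.1f}; settles at {final_P:.1f} "
      f"after capacity degrades from 100.0 to {final_K:.1f}")
```

The point is not the numbers but the shape: growth, overshoot past a capacity that is itself shrinking, then decline to a far lower equilibrium than the environment could originally have supported.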
Which brings us to industrial/post industrial civilization:
I think the best metaphor for industrial civilization is a rocket—it’s an incredibly powerful channeled explosion that has the potential to take you to space, but also has the potential to explode, and has a finite quantity of fuel.
The ‘fuel’, in the case of industrial civilization, is not simply material resources such as oil and coal, but also environmental resources—the complex ecologies that support life on the planet, and even the stable, temperate climate that gave us the opportunity to settle down and form civilization.
Civilization can only form during the tiny peaks in the glacial temperature cycle, the interglacial periods. Anthropogenic climate change is far beyond the bounds of this cycle, and there is no guarantee that it will return to a cadence capable of supporting future civilizations.
Further, our current level of development was the result of a complex chain of geopolitical events that resulted in a prolonged period of global stability and prosperity.
While it may be possible for future civilizations to achieve some level of technological development, it is incredibly unlikely they will ever have the resources and conditions that enabled us to reach the ‘digital’ tech level.
Consider that even now, under far better conditions than we can expect future civilizations to have, it is still more likely that we’ll destroy ourselves than flourish. That potential for self-destruction is unabated in future civilizations, whereas the potential for flourishing is heavily if not completely depleted.
[1] https://biologydictionary.net/carrying-capacity/
Replying to myself with an additional contribution I just read that says everything much better than I managed:
In physics terms, the world economy, as well as all of the individual economies within it, are dissipative structures. As such, growth followed by collapse is a usual pattern. At the same time, new versions of dissipative structures can be expected to form, some of which may be better adapted to changing conditions. Thus, approaches for economic growth that seem impossible today may be possible over a longer timeframe.
For example, if climate change opens up access to more coal supplies in very cold areas, the Maximum Power Principle would suggest that some economy will eventually access such deposits. Thus, while we seem to be reaching an end now, over the long-term, self-organizing systems can be expected to find ways to utilize (“dissipate”) any energy supply that can be inexpensively accessed, considering both complexity and direct fuel use.
(Gail Tverberg)
I would add that while new structures can be expected to form, because they are adapted for different conditions and exploiting different energy gradients, we should not expect them to have the same features/levels of complexity.
This is highly relevant to your interest in scaling trust:
https://www.lesswrong.com/posts/Fu7bqAyCMjfcMzBah/eigenkarma-trust-at-scale
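For anyone who hasn’t read the post: the gist, as I understand it, is trust propagated through the graph of who-trusts-whom, personalised-PageRank-style, from the perspective of a seed user. A minimal sketch of that idea (illustrative only; this is not the project’s actual code or API):

```python
import numpy as np

# trust[i][j] = how much user i directly trusts user j (e.g. upvotes given).
trust = np.array([
    [0, 3, 1, 0],
    [1, 0, 2, 0],
    [0, 1, 0, 4],
    [0, 0, 1, 0],
], dtype=float)

# Row-normalise so each user distributes one unit of trust in total.
row_sums = trust.sum(axis=1, keepdims=True)
P = np.divide(trust, row_sums, out=np.zeros_like(trust), where=row_sums > 0)

# Personalised PageRank from a seed user: trust is computed from one
# user's point of view rather than as a single global score.
seed = np.array([1.0, 0.0, 0.0, 0.0])
alpha = 0.85  # follow a trust edge with prob. alpha, reset to seed otherwise
scores = seed.copy()
for _ in range(100):  # power iteration to the stationary distribution
    scores = alpha * (scores @ P) + (1 - alpha) * seed

print(np.round(scores, 3))  # each user's trust score, from the seed's perspective
```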
Yeah :) I’m actually already trying to contribute to that project. Thanks for thinking of me when you saw something relevant though.