One possibility that maybe you didn’t close off (unless I missed it) is “death by feature creep” (more likely “decline by feature creep”). It’s somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents, both AI and human) might be trying to optimize for stability, and thus regulate each other, while also trying to maximize some growth variable (innovation, profit).
Our inter-agent (social, regulatory, economic, political) systems were built up by the application of human intelligence to the point that human intelligence can no longer comprehend the whole, which makes systemic problems hard to solve. So in one possible scenario, humans plus narrow AI might simplify the system at first, but then keep adding features to the system of civilization until it is unwieldy again. (Maybe a superintelligent AGI could figure it out? But if it started adding its own features, then maybe not even it would understand what had evolved.) Complexity can come from competitive pressures, but also from technological innovations. Each innovation stresses the system until the system can assimilate it more or less safely by means of new regulation (e.g., social media destabilizes politics unless, or until, we can break or manage some of its power).
Then, if some kind of feedback loop toward civilizational decline begins, general intelligences (humans, if humans are the only general intelligences) might be even less capable of figuring out how to reverse course than they are now. In one sense, this is just narrow AI as another important technology that marginally complicates the world. But we might also use narrow AI as tools in the governance of AI, or of AI plus humans (or perhaps in understanding innovation), and those tools might be capable of understanding things that we cannot (often things that AI themselves made up), creating a dependency that could contribute to a decline in a unique way.
(Maybe “understand” is the wrong word to apply to narrow AI, but “process in a way sufficiently opaque to humans” works and is just as bad.)