The idea of AI as a 4th industrial revolution seems to have been pushed forward mainly by economists, from what I can see, and longtermists then picked up the idea because it is of course relevant to them.
My impression is that when most economists talk about AI as a 4th industrial revolution they’re talking about impacts much smaller than what longtermists have in mind when they talk about “impacts at least as big as the Industrial Revolution”. For example, in a public Google doc on What Open Philanthropy means by “transformative AI”, Luke Muehlhauser says:
Unfortunately, in our experience, most people who encounter this definition (understandably) misunderstand what we mean by it. In part this may be due to the ubiquity of discussions about how AI (and perhaps other “transformative technologies”) may usher in a “4th industrial revolution,” which sounds similar to our definition of transformative AI, but (in our experience) typically denotes a much smaller magnitude of transformation than we have in mind when discussing “transformative AI.”
To explain, I think the common belief is that the (first) Industrial Revolution caused a shift to a new ‘growth mode’ characterized by much higher growth rates of total economic output as well as other indicators relevant to well-being (e.g. life expectancy). It is said to be comparable to only the agricultural revolution (and perhaps earlier fundamental changes such as the arrival of humans or major transitions in evolution).
By contrast, the so-called second and third industrial revolutions (electricity, computers, …) merely sustained the new trend that was kicked off by the first. Hence the title of Luke Muehlhauser’s influential blog post There was only one industrial revolution.
So, e.g., in terms of the economic growth rate, I think economists are talking about a roughly business-as-usual scenario, while longtermists are talking about the economic doubling time falling from about a decade to a month.
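To make the size of that gap concrete, here is a quick back-of-the-envelope calculation (my own illustration, not from the original discussion) converting doubling times into implied annual growth rates via r = 2^(1/T) − 1, where T is the doubling time in years:

```python
def annual_growth_rate(doubling_time_years: float) -> float:
    """Annual growth rate implied by a given economic doubling time."""
    return 2 ** (1 / doubling_time_years) - 1

decade = annual_growth_rate(10)      # output doubles every decade
month = annual_growth_rate(1 / 12)   # output doubles every month

print(f"doubling per decade -> {decade:.1%} annual growth")   # ~7.2%
print(f"doubling per month  -> {month:,.0%} annual growth")   # ~409,500%
```

Doubling every decade implies roughly 7% annual growth, in the ballpark of fast historical growth episodes; doubling every month implies output multiplying by about 4,096x per year, a qualitatively different regime.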
Regarding timing, I also think that some versions of longtermist concerns about AI predate talk about a 4th industrial revolution by decades. (By this, I mean concerns that are of major relevance for the long-term future and meet the ‘transformative AI’ impact bar, not concerns by people who explicitly considered themselves longtermists or were explicitly comparing their concerns to the Industrial Revolution.) For example, the idea of an intelligence explosion was stated by I. J. Good in 1965, and people also often see concerns about AI risk expressed in statements by Norbert Wiener in 1960 (e.g. here, p. 4) or Alan Turing in 1951.
--
I’m less sure about this, but I think most longtermists wouldn’t consider AI to be a competitive cause area if their beliefs about the impacts of AI were similar to those of economists talking about a 4th industrial revolution. Personally, in that case I’d probably put it below all of bio, nuclear, and climate change.