Interesting, it sounds like we’re using these terms somewhat differently. I guess I’m thinking of (longtermist) macrostrategy and global priorities research as trying to find high-level answers to the questions “How can we do the most good?”, “How can we best improve the long-term future?”, and “How do we even think about these questions?”.
The unilateralist’s curse is relevant to the third question, and the insight about AI relevant to the second question.
Admittedly, while I’d count “AI may be an important cause area” as macrostrategy/GPR, I’d probably exclude particulars on how best to align AI, and the boundary is fuzzy.
I share Max’s sense that those two examples fit, while particulars on how best to align AI wouldn’t.
It also seems worth noting that Bostrom/FHI were among the most prominent champions (though not the sole originator) of “AI may be an important cause area”, and Bostrom was the lead author on the unilateralist’s curse paper. Bostrom and FHI themselves describe a big part of what they do as macrostrategy. (I think they may also have coined the term, though I’m unsure.)
In that case, I stand corrected on the unilateralist’s curse; I thought it was more mainstream.
I agree they’re relevant to these areas! But I’m not sure that people had these areas in mind when they had these insights originally. The idea of AI as a 4th industrial revolution was pushed forward by economists, from what I can see? And then longtermists picked up the idea because of course it’s relevant.
My impression is that when most economists talk about AI as a 4th industrial revolution they’re talking about impacts much smaller than what longtermists have in mind when they talk about “impacts at least as big as the Industrial Revolution”. For example, in a public Google doc on What Open Philanthropy means by “transformative AI”, Luke Muehlhauser says:
Unfortunately, in our experience, most people who encounter this definition (understandably) misunderstand what we mean by it. In part this may be due to the ubiquity of discussions about how AI (and perhaps other “transformative technologies”) may usher in a “4th industrial revolution,” which sounds similar to our definition of transformative AI, but (in our experience) typically denotes a much smaller magnitude of transformation than we have in mind when discussing “transformative AI.”
To explain, I think the common belief is that the (first) Industrial Revolution caused a shift to a new ‘growth mode’ characterized by much higher growth rates of total economic output, as well as of other indicators relevant to well-being (e.g. life expectancy). It is said to be comparable only to the agricultural revolution (and perhaps to earlier fundamental changes such as the arrival of humans or major transitions in evolution).
By contrast, the so-called second and third industrial revolutions (electricity, computers, …) merely sustained the new trend that was kicked off by the first. Hence the title of Luke Muehlhauser’s influential blog post “There was only one industrial revolution”.
So e.g. in terms of the economic growth rate, I think economists talk about a roughly business-as-usual scenario, while longtermists talk about the economic doubling time falling from a decade to a month.
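To make the size of that gap concrete, here’s a rough back-of-the-envelope sketch (with illustrative numbers of my own, not figures taken from either literature) converting doubling times into the growth factor they imply over a single year:

```python
# Convert a constant doubling time into the implied growth factor over one year.
def annual_growth_factor(doubling_time_years: float) -> float:
    return 2 ** (1 / doubling_time_years)

# Business-as-usual-ish growth (illustrative): doubling every ~25 years,
# i.e. roughly 3% growth per year.
print(annual_growth_factor(25))      # ~1.028

# A doubling time of a decade already implies ~7% growth per year.
print(annual_growth_factor(10))      # ~1.072

# A doubling time of one month (1/12 of a year) implies output multiplying
# by 2**12 = 4096 within a single year.
print(annual_growth_factor(1 / 12))  # 4096.0
```

The point is just that “a somewhat faster version of current growth” and “doubling times measured in months” are orders of magnitude apart, so the two groups are talking about very different scenarios.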
Regarding timing, I also think that some versions of longtermist concerns about AI predate talk about a 4th industrial revolution by decades. (By this, I mean concerns that are of major relevance for the long-term future and meet the ‘transformative AI’ impact bar, not concerns by people who explicitly considered themselves longtermists or were explicitly comparing their concerns to the Industrial Revolution.) For example, the idea of an intelligence explosion was stated by I. J. Good in 1965, and people also often see concerns about AI risk expressed in statements by Norbert Wiener in 1960 (e.g. here, p. 4) or Alan Turing in 1951.
--
I’m less sure about this, but I think most longtermists wouldn’t consider AI to be a competitive cause area if their beliefs about the impacts of AI were similar to those of economists talking about a 4th industrial revolution. Personally, in that case I’d probably put it below all of bio, nuclear, and climate change.