I must admit that I’m quite confused about some of the key definitions employed in this series, and, in part for that reason, I’m often confused about which claims are being made. Specifically, I’m confused about the definitions of “transformative AI” and “PASTA”, which strike me as vaguer and/or less well-chosen than the series sometimes seems to assume. I’ll try to explain below.
1. Transformative AI (TAI)
1.1 The simple definition
The simple definition of TAI used here is “AI powerful enough to bring us into a new, qualitatively different future”. This definition seems quite problematic given how vague it is. Not that it is entirely meaningless, of course; it surely gives some indication of what we are talking about. Yet it falls far short of the bar that someone like Tetlock would require for us to track predictions, as a lot of things could be argued to (not) count as “a new, qualitatively different future.”
1.2 The Industrial Revolution definition
A slightly more elaborate definition found elsewhere, and referred to in a footnote in this series, is “software (i.e. a computer program or collection of computer programs) that has at least as profound an impact on the world’s trajectory as the Industrial Revolution did.” An alternative version of this definition is: “AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.”
This might be a bit more specific, but it again seems to fall short of the Tetlock bar: what exactly do we mean by the term “the world’s trajectory”, and how would we measure an impact on it that is “at least as profound” as that of the Industrial Revolution?
For example, the Industrial Revolution occurred (by some definitions) roughly from 1760 to 1840: about 80 years during which the world economy got almost three times bigger, and during which we began to see the emergence of a new superpower, the United States. Compare this to the last 80 years, from 1940 to 2020, what we might call “The Age of the Computer”, during which the economy has doubled almost five times (i.e. it is roughly 30 times bigger). (In fact, by DeLong’s estimates, the economy more than tripled, i.e. surpassed the relative economic growth of the Industrial Revolution, in just the 25 years from 1940 to 1965.) And we saw the fall of a superpower, the Soviet Union; the rise of a new one, China; and the emergence of international institutions such as the EU and the UN.
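To make the comparison concrete, here is the rough arithmetic behind these figures, as a back-of-the-envelope sketch using the approximate multipliers cited above (illustrative round numbers, not precise historical data):

```python
# Rough comparison of implied average annual growth rates, using the
# approximate multipliers cited above (illustrative, not precise data).

def annual_growth_rate(total_multiplier, years):
    """Compound annual growth rate implied by growing total_multiplier-fold over the given years."""
    return total_multiplier ** (1 / years) - 1

# Industrial Revolution: world economy ~3x larger over ~80 years (1760-1840).
ir_rate = annual_growth_rate(3, 80)

# "Age of the Computer": ~30x larger (almost five doublings) over 80 years (1940-2020).
computer_rate = annual_growth_rate(30, 80)

print(f"Industrial Revolution: ~{ir_rate:.1%}/yr")        # ~1.4%/yr
print(f"Age of the Computer:   ~{computer_rate:.1%}/yr")  # ~4.3%/yr
```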
So doesn’t “The Age of the Computer” already have a plausible claim to having had “at least as profound an impact on the world’s trajectory as the Industrial Revolution did”, even if no further growth were to occur? And by extension, could one not argue that the software of this age already has a plausible claim to having “precipitated” a transition comparable to this revolution? (This hints at the difficulty of specifying what counts as sufficient “precipitation” relative to the definition above: after all, we could not have grown the economy as much as we have over the last 80 years were it not for software, so existing software has clearly been a necessary and even a major component; yet it has still just been one among a number of factors accounting for this growth.)
1.3 The growth definition
A definition that seems more precise, and which has been presented as an operationalization of the previous definition, is phrased in terms of growth of the world economy, namely as “software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere [and] that it would be economically profitable to use it).”
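For a sense of scale, note what a tenfold acceleration would amount to. Assuming, purely for illustration (the definition itself does not fix a baseline), that the world economy currently grows at about 3 percent per year, a tenfold acceleration would mean roughly 30 percent per year:

```python
import math

def doubling_time(annual_rate):
    """Years for the world economy to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_rate)

baseline = 0.03              # assumed current growth rate, for illustration only
accelerated = 10 * baseline  # the "tenfold acceleration" in the growth definition

print(f"Baseline:    {baseline:.0%}/yr, doubling every ~{doubling_time(baseline):.0f} years")      # ~23 years
print(f"Accelerated: {accelerated:.0%}/yr, doubling every ~{doubling_time(accelerated):.1f} years")  # ~2.6 years
```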
I think this definition is also problematic, in that it fails in significant ways to capture what people are often worried about in relation to AI.
First, there is the relatively minor point that it is unclear in what cases we could be justified in attributing a tenfold acceleration in the economy to software (were such an acceleration to occur), rather than to a number of different factors that may all be similarly important, as was arguably the case in the Industrial Revolution.
For instance, if the rate of economic growth were to increase tenfold without software coming to play a significantly larger role in the economy than it does today, i.e. if its share of the world economy were to remain roughly constant, yet with software still being a critical component for this growth, would this software qualify as TAI by the definition above? (Note that our software can get a lot more advanced in an absolute sense even as its relative role in the economy remains largely the same.) It’s not entirely clear. (Not even if we consult the more elaborate “Definition #2” of TAI provided here.) And it’s not entirely irrelevant either, since economic growth appears to have been driven by an interplay of many different factors historically, and so the same seems likely to be true in the future.
But more critical, I think, is that the growth definition seems to exclude a large class of scenarios that would appear to qualify as “transformative AI” in the qualitative sense mentioned above, and that many people concerned about AI would consider “transformative” and important. It is, after all, entirely conceivable, and arguably plausible, that we could get software that “would bring us into a new, qualitatively different future” without growth rates changing much. Indeed, growth rates could decline significantly, such that the world economy only grows by, e.g., one percent a year, and we could still — if such growth were to play out for another, say, 150 years — end up with “transformative AI” in the sense(s) that people are most worried about, and which could in principle entail a “value drift” and “lock-in” just as much as more rapidly developed AI.
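For illustration, even this slow-growth scenario is cumulatively substantial, which is part of why it remains compatible with eventually reaching highly advanced AI:

```python
# Cumulative effect of slow but sustained growth: at 1 percent per year, the
# world economy still more than quadruples over 150 years.
rate, years = 0.01, 150
print(f"Growth over {years} years at {rate:.0%}/yr: ~{(1 + rate) ** years:.1f}x")  # ~4.4x
```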
I guess a reply might be that these are just very rough definitions and operationalizations, and that one shouldn’t take them to be more than that. But it seems that they often are taken to be more than that; for instance, the earlier-cited document that provides the growth definition appears to say that it “best captures what we ultimately care about as philanthropists”.
I think it is worth being clear that the definitions discussed above are in fact very vague and/or that they diverge in large and important ways from the AI scenarios people often worry about, including many of the scenarios that seem most plausible.
2. PASTA
PASTA was defined as: “AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement.”
This leaves open how much of a speed-up we are talking about. It could be just a marginal speed-up (relative to previous growth rates), or it could be a speed-up by orders of magnitude. But in some places it seems that the latter is implicitly assumed.
One might, of course, argue that automating all human activities related to scientific and technological progress would have to imply a rapid speed-up, but this is not necessarily the case. It is conceivable, and in my view quite likely, that such automation could happen very gradually, and that we could transition to fully or mostly automated science in a manner that implies growth rates that are similar to those we see today.
We have, after all, already automated or outsourced much of science, to such an extent that past scientists might say that, relative to their perspective, the vast majority of science has been automated, with science-related calculations, illustrations, simulations, manufacturing, etc. that are, by their standards, mostly done by computers and other machines. And this trend could well continue without being more explosive than the growth we have seen so far. In particular, the step from 90 percent to 99 percent automated science (or across any similar interval) could happen over many years, at a familiar and fairly steady growth rate, as the sketch below illustrates.
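One way to make this intuition concrete is a simple Amdahl’s-law-style model. This is my own illustrative sketch, not something from the series; the fractions, and the assumption that non-automated work proceeds at a fixed human pace, are stipulated purely for illustration:

```python
# Amdahl's-law-style sketch: if a fraction p of scientific work is automated
# with speedup s, while the remaining (1 - p) proceeds at human pace, the
# overall speedup is bounded by 1 / (1 - p) no matter how large s gets.

def overall_speedup(p, s):
    """Overall speedup of scientific output with fraction p automated at speedup s."""
    return 1 / ((1 - p) + p / s)

for p in (0.5, 0.9, 0.99):
    print(f"{p:.0%} automated: ~{overall_speedup(p, 1e9):.0f}x overall "
          f"(capped at {1 / (1 - p):.0f}x by the non-automated remainder)")
```

On this toy model, how fast science accelerates depends on how quickly the residual non-automated bottleneck shrinks; if the step from 90 to 99 percent automation takes many years, the resulting growth in scientific output could look steady rather than explosive.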
I think it’s worth being clear that the intuition that fully automated science is in some sense inevitable (assuming continued technological progress) does not imply that a growth explosion is inevitable, or even that such an explosion is more likely to happen than not.
On “transformative AI”: I agree that this is quite vague and not as well-defined as it would ideally be, and is not the kind of thing I think we could just hand to superforecasters. But I think it is pointing at something important that I haven’t seen a better way of pointing at.
I like the definition given in Bio Anchors (which you link to), which includes a footnote addressing the fact that AI could be transformative without literally causing GDP growth to behave as described. I’m sure there are imperfections remaining, and it remains vague, but I think most people can get a pretty good idea of what’s being pointed at there, and I think it reasonably fleshes out the vaguer, simpler definition (which I think is also useful for giving a high-level impression).
In this series, I mostly stuck with the simple definition because I think the discussion of PASTA and digital people makes it fairly easy to see what kind of specific thing I’m pointing at, in a different way.
I am not aware of places where it’s implied that “transformative AI” is a highly well-defined concept suitable for superforecasters (and I don’t think the example you gave in fact implies this), but I’m happy to try to address them if you point them out.
On PASTA: my view is that there is a degree of automation that would in fact result in dramatically faster scientific progress than we’ve ever seen before. I don’t think this is self-evident, or tightly proven by the series, but it is something I believe, and I think the series does a reasonable job pointing to the main intuitions behind why I believe it (in particular, the theoretical feedback loop this would create, the “modeling the human trajectory” projection of what we might expect if the “population bottleneck” were removed, and the enormous transformative potential of particular technologies that might result).
> I’m sure there are imperfections remaining, and it remains vague, but I think most people can get a pretty good idea of what’s being pointed at there, and I think it reasonably fleshes out the vaguer, simpler definition (which I think is also useful for giving a high-level impression).
I’d disagree that most people can get a good idea of what’s being pointed at; not least for the reasons I outlined in Section 1.2 above, regarding how advanced software could already reasonably be claimed to have “precipitate[d] a transition comparable to (or more significant than) the agricultural or industrial revolution”. :)
So I’d also disagree that it “reasonably fleshes out the vaguer, simpler definition”. Indeed, I don’t think “Transformative AI” is a much clearer term than, say, “Advanced AI” or “Powerful AI”, but it often seems used as though it’s much clearer (see e.g. below).
> I am not aware of places where it’s implied that “transformative AI” is a highly well-defined concept suitable for superforecasters (and I don’t think the example you gave in fact implies this), but I’m happy to try to address them if you point them out.
My point wasn’t about superforecasters in particular. Rather, my point was that the current definitions of TAI are so vague that it doesn’t make much sense to talk about, say, “the year by which transformative AI will be developed”. Again, it is highly unclear what would count and how one would resolve any forecast about it.
As for (super)forecasters, I wonder: if the concept/definition is not “suitable for superforecasters” — that is, for clearly resolvable forecasts — why is it suitable for attempts to forecast this “one number” [i.e. the year by which transformative AI will be developed]? If one doesn’t think it allows for clearly resolvable forecasts, perhaps it would be good to note that from the outset, and when making estimates such as “more than a 10% chance of ‘transformative AI’ within 15 years”.
Sidenote: there has been an argument that ‘radically transformative AI’ is a better term for the Industrial Revolution definition, given the semantic bleaching already taking place with ‘transformative AI’.