I’m sure imperfections remain, and it is still vague, but I think most people can get a pretty good idea of what’s being pointed at there, and I think it reasonably fleshes out the vaguer, simpler definition (which I think is also useful for giving a high-level impression).
I’d disagree that most people can get a good idea of what’s being pointed at; not least for the reasons I outlined in Section 1.2 above, regarding how advanced software could already reasonably be claimed to have “precipitate[d] a transition comparable to (or more significant than) the agricultural or industrial revolution”. :)
So I’d also disagree that it “reasonably fleshes out the vaguer, simpler definition”. Indeed, I don’t think “Transformative AI” is a much clearer term than, say, “Advanced AI” or “Powerful AI”, but it often seems used as though it’s much clearer (see e.g. below).
I am not aware of places where it’s implied that “transformative AI” is a highly well-defined concept suitable for superforecasters (and I don’t think the example you gave in fact implies this), but I’m happy to try to address them if you point them out.
My point wasn’t about superforecasters in particular. Rather, my point was that the current definitions of TAI are so vague that it doesn’t make much sense to talk about, say, “the year by which transformative AI will be developed”. Again, it is highly unclear what would count and how one would resolve any forecast about it.
As for (super)forecasters, I wonder: if the concept/definition is not “suitable for superforecasters” — that is, for clearly resolvable forecasts — why is it suitable for attempts to forecast this “one number [i.e., the year by which transformative AI will be developed]”? If one doesn’t think it allows for clearly resolvable forecasts, perhaps it would be good to note that from the outset, and when making estimates such as “more than a 10% chance of ‘transformative AI’ within 15 years”.