I very much like how careful you are in looking at this question of the burden of proof when discussing transformative AI. One thing I’m uncertain about, though: is the “most important century” framing the best one to use when discussing this? It seems to me like “transformative AI is coming this century” and “this century is the most important century” are very different claims which you tend to conflate in this sequence.
One way of thinking about this: suppose that, this century, there’s an AI revolution at least as big as the industrial revolution. How many more similarly-sized revolutions are plausible before reaching a stable galactic civilisation? The answer to this question could change our estimate of P(this is the most important century) by an order of magnitude (or perhaps two, if we have good reasons to think that future revolutions will be more important than this century’s TAI), but has a relatively small effect on what actions we should take now.
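To make the order-of-magnitude point concrete, here is a minimal toy sketch (my own simplifying assumption, not anything claimed in the thread): if each of N comparably-sized future revolutions has roughly an equal claim to being the "most important", then the probability that this century's AI revolution is the most important scales roughly as 1/(N+1).

```python
# Toy illustration (assumption for this sketch only): symmetry across N + 1
# comparably-sized revolutions, so each has an equal chance of being the
# "most important" one.

def p_most_important(n_future_revolutions: int) -> float:
    """P(this century's revolution is the most important) under the
    equal-claim assumption across N + 1 comparable revolutions."""
    return 1.0 / (n_future_revolutions + 1)

for n in (0, 1, 9, 99):
    print(f"{n:3d} future revolutions -> P(most important) ~ {p_most_important(n):.2f}")
# 0 -> 1.00, 1 -> 0.50, 9 -> 0.10, 99 -> 0.01
# i.e. going from "no further revolutions" to "a handful" or "dozens"
# shifts the estimate by one or two orders of magnitude.
```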
More generally, I think that claims which depend on the specifics of our long-term trajectory after transformative AI are much easier to dismiss as being speculative (especially given how much pushback claims about reaching TAI already receive for being speculative). So I’d much rather people focus on the claim that “AI will be really, really big” than “AI will be bigger than anything else which comes afterwards”. But it seems like framing this sequence of posts as the “most important century” sequence pushes towards the latter.
Oh, also, depending on how you define “important”, it may be the case that past centuries were more important because they contained the best opportunities to influence TAI—e.g. when the west became dominant, or during WW1 and WW2, or the cold war. Again, that’s not very action-guiding, but it does make the “most important century” claim even more speculative.
So I’d much rather people focus on the claim that “AI will be really, really big” than “AI will be bigger than anything else which comes afterwards”.
I think AI is much more likely to make this the most important century than to be “bigger than anything else which comes afterwards.” Analogously, the 1000 years after the IR are likely to be the most important millennium even though it seems basically arbitrary whether you say the IR is more or less important than AI or the agricultural revolution. In all those cases, the relevant thing is that a significant fraction of all remaining growth and technological change is likely to occur in the period, and many important events are driven by growth or tech change.
The answer to this question could change our estimate of P(this is the most important century) by an order of magnitude
I think it’s more likely than not that there will be future revolutions as important as TAI, but there’s a good probability that AI leads to enough acceleration that a large fraction of those future revolutions occur in the same century. There’s room for debate over the exact probability and timeline of such acceleration, but I think there’s no real way to argue for anything as low as 10%.
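As a rough illustration of the acceleration point (a toy model with assumed numbers, not the commenter's): if each post-TAI revolution arrives in a fixed fraction of the time the previous one took, the cumulative time converges, so many further revolutions can land within the same century.

```python
# Toy sketch (assumed parameters for illustration only): each gap between
# revolutions is `speedup` times the previous gap, so cumulative time is a
# geometric series that converges when speedup < 1.

def years_until_nth_revolution(first_gap_years: float, speedup: float, n: int) -> float:
    """Cumulative years from TAI until the n-th subsequent revolution."""
    return sum(first_gap_years * speedup**k for k in range(n))

for n in (1, 2, 5, 10):
    print(f"revolution {n:2d} arrives ~{years_until_nth_revolution(30, 0.3, n):.0f} years after TAI")
# With an assumed 30-year first gap and each gap ~3x shorter than the last,
# even the 10th revolution arrives within ~43 years, i.e. inside the same century.
```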
It seems to me like “transformative AI is coming this century” and “this century is the most important century” are very different claims which you tend to conflate in this sequence.
I agree they’re different claims; I’ve tried not to conflate them. For example, in this section I give different probabilities for transformative AI and two different interpretations of “most important century.”
This post contains a few cases where I think the situation is somewhat confusing, because there are “burden of proof” arguments that take the basic form, “If this type of AI is developed, that will make it likely that it’s the most important century; there’s a burden of proof on arguing that it’s the most important century because ___.” So that does lead to some cases where I am defending “most important century” within a post on AI timelines.
More generally, I think that claims which depend on the specifics of our long-term trajectory after transformative AI are much easier to dismiss as being speculative (especially given how much pushback claims about reaching TAI already receive for being speculative). So I’d much rather people focus on the claim that “AI will be really, really big” than “AI will be bigger than anything else which comes afterwards”. But it seems like framing this sequence of posts as the “most important century” sequence pushes towards the latter.
I struggled a bit with this; you might find this page helpful, especially the final section, “Holistic intent of the ‘most important century’ phrase.” I ultimately decided that relative to where most readers are by default, “most important century” is conveying a more accurate high-level message than something like “extraordinarily important century”—the latter simply does not get across the strength of the claim—even though it’s true that “most important century” could end up being false while the overall spirit of the series (that this is a massively high-stakes situation) ends up being right.
I also think it’s the case that the odds of “most important century” being literally true are still decently high (though substantially lower than “transformative AI this century”). A key intuition behind this claim is the idea that PASTA could radically speed things up, such that this century ends up containing as much eventfulness as we’re used to from many centuries. (Some more along these lines in the section starting “To put this possibility in perspective, it’s worth noting that the world seems to have ‘sped up’” from the page linked above.)
Oh, also, depending on how you define “important”, it may be the case that past centuries were more important because they contained the best opportunities to influence TAI—e.g. when the west became dominant, or during WW1 and WW2, or the cold war. Again, that’s not very action-guiding, but it does make the “most important century” claim even more speculative.
I address this briefly in footnote 1 on the page linked above: “You could say that actions of past centuries also have had ripple effects that will influence this future. But I’d reply that the effects of these actions were highly chaotic and unpredictable, compared to the effects of actions closer-in-time to the point where the transition occurs.”
Thanks for the response, that all makes sense. I missed some of the parts where you disambiguated those two concepts; apologies for that. I suspect I still see the disparity between “extraordinarily important century” and “most important century” as greater than you do, though, perhaps because I consider value lock-in this century less likely than you do—I haven’t seen particularly persuasive arguments for it in general (as opposed to in specific scenarios, like AGIs with explicit utility functions or the scenario in your digital people post). And relatedly, I’m pretty uncertain about how far away technological completion is—I can imagine transitions to post-human futures in this century which still leave a huge amount of room for progress in subsequent centuries.
I agree that “extraordinarily important century” and “transformative century” don’t have the same emotional impact as “most important century”. I wonder if you could help address this by clarifying that you’re talking about “more change this century than since X” (for X = a millennium ago, or since agriculture, or since cavemen, or since we diverged from chimpanzees). “Change” also seems like a slightly more intuitive unit than “importance”, especially for non-EAs for whom “importance” is less strongly associated with “our ability to exert influence”.
Agreed that we probably disagree about lock-in. I don’t want my whole case to ride on it, but I don’t want it to be left out as an important possibility either.
With that in mind, I think the page I linked is conveying the details of what I mean pretty well (although I also find the “more change than X” framing interesting), and I think “most important century” is still the best headline version I’ve thought of.
Strongly agree—the final paragraph rubbed me the wrong way because it equated “the most important century” with “people needing to take action to save the world”.