It seems to me like “transformative AI is coming this century” and “this century is the most important century” are very different claims which you tend to conflate in this sequence.
I agree they’re different claims; I’ve tried not to conflate them. For example, in this section I give different probabilities for transformative AI and two different interpretations of “most important century.”
This post does contain a few cases where the situation is somewhat confusing, because I make “burden of proof” arguments of the basic form: “If this type of AI is developed, that will make it likely that this is the most important century; there’s a burden of proof on that claim because ___.” So there are some cases where I am defending “most important century” within a post on AI timelines.
More generally, I think that claims which depend on the specifics of our long-term trajectory after transformative AI are much easier to dismiss as being speculative (especially given how much pushback claims about reaching TAI already receive for being speculative). So I’d much rather people focus on the claim that “AI will be really, really big” than “AI will be bigger than anything else which comes afterwards”. But it seems like framing this sequence of posts as the “most important century” sequence pushes towards the latter.
I struggled a bit with this; you might find this page helpful, especially the final section, “Holistic intent of the ‘most important century’ phrase.” I ultimately decided that relative to where most readers are by default, “most important century” is conveying a more accurate high-level message than something like “extraordinarily important century”—the latter simply does not get across the strength of the claim—even though it’s true that “most important century” could end up being false while the overall spirit of the series (that this is a massively high-stakes situation) ends up being right.
I also think it’s the case that the odds of “most important century” being literally true are still decently high (though substantially lower than “transformative AI this century”). A key intuition behind this claim is the idea that PASTA could radically speed things up, such that this century ends up containing as much eventfulness as we’re used to from many centuries. (Some more along these lines in the section starting “To put this possibility in perspective, it’s worth noting that the world seems to have ‘sped up’” from the page linked above.)
Oh, also, depending on how you define “important”, it may be the case that past centuries were more important because they contained the best opportunities to influence TAI—e.g. when the West became dominant, or during WW1 and WW2, or the Cold War. Again, that’s not very action-guiding, but it does make the “most important century” claim even more speculative.
I address this briefly in footnote 1 on the page linked above: “You could say that actions of past centuries also have had ripple effects that will influence this future. But I’d reply that the effects of these actions were highly chaotic and unpredictable, compared to the effects of actions closer-in-time to the point where the transition occurs.”
Thanks for the response, that all makes sense. I missed some of the parts where you disambiguated those two concepts; apologies for that. I suspect I still see the disparity between “extraordinarily important century” and “most important century” as greater than you do, though, perhaps because I consider value lock-in this century less likely than you do—I haven’t seen particularly persuasive arguments for it in general (as opposed to in specific scenarios, like AGIs with explicit utility functions or the scenario in your digital people post). And relatedly, I’m pretty uncertain about how far away technological completion is—I can imagine transitions to post-human futures in this century which still leave a huge amount of room for progress in subsequent centuries.
I agree that “extraordinarily important century” and “transformative century” don’t have the same emotional impact as “most important century”. I wonder if you could help address this by clarifying that you’re talking about “more change this century than since X” (for X = a millennium ago, or since agriculture, or since cavemen, or since we diverged from chimpanzees). “Change” also seems like a slightly more intuitive unit than “importance”, especially for non-EAs for whom “importance” is less strongly associated with “our ability to exert influence”.
Agreed that we probably disagree about lock-in. I don’t want my whole case to ride on it, but I don’t want it to be left out as an important possibility either.
With that in mind, I think the page I linked is conveying the details of what I mean pretty well (although I also find the “more change than X” framing interesting), and I think “most important century” is still the best headline version I’ve thought of.