Are you saying that e.g. a war between China and Taiwan makes it impossible to build AGI? Or that serial time requirements make AGI impossible? Or that scaling chips means AGI is impossible?
C’mon Paul—please extend some principle of charity here. :)
You have repeatedly ascribed silly, impossible beliefs to us and I don’t know why (to be fair, in this particular case you’re just asking, not ascribing). Genuinely, man, I feel bad that our writing has either (a) given the impression that we believe such things or (b) given the impression that we’re the type of people who’d believe such things.
Like, are these sincere questions? Is your mental model of us that there’s genuine uncertainty over whether we’ll say “Yes, a war precludes AGI” vs. “No, a war does not preclude AGI”?
To make it clear: No, of course a war between China and Taiwan does not make it impossible to build AGI by 2043. As our essay explicitly says.
Some things can go wrong and you can still get AGI by 2043. If you want to argue that AGI can’t be built if any of these things goes wrong, that’s a whole different story. So multiplying probabilities (even conditional probabilities) of none of these things happening doesn’t seem right.
To make it clear: our forecasts are not the odds of wars, pandemics, and depressions not occurring. They are the odds of wars, pandemics, and depressions not delaying AGI beyond 2043. Most wars, most pandemics, and most depressions will not delay AGI beyond 2043, we think. Our methodology is to forecast only the most severe events, and then assume that even among those, a good fraction won’t delay AGI beyond 2043. As our essay explicitly says.
We probably forecast higher odds of delay than you, because our low likelihoods of TAGI mean that TAGI, if developed, is likeliest to be developed nearer to the end of the period, without many years of slack. If TAGI is easy, and can be developed early or with plenty of slack, then it becomes much harder for these types of events to derail TAGI.
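To make the arithmetic concrete, here is a minimal sketch of that structure, with made-up numbers (they are not our actual forecasts): the factor that enters the product is the chance that a class of severe events does not push TAGI past 2043, not the chance that no such event occurs at all.

```python
# All numbers are hypothetical, for illustration only (not the essay's forecasts).
p_severe_war   = 0.40  # chance of a severe war (e.g. over Taiwan) before 2043
p_delay_if_war = 0.50  # fraction of such wars that would push TAGI past 2043

# The factor multiplied into the overall estimate is the probability that the
# event class does NOT derail TAGI, not the probability that it never happens.
p_not_derailed_by_war = 1 - p_severe_war * p_delay_if_war
print(p_not_derailed_by_war)  # 0.8
```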
My point in asking “Are you assigning probabilities to a war making AGI impossible?” was to emphasize that I don’t understand what 70% is a probability of, or why you are multiplying these numbers. I’m sorry if the rhetorical question caused confusion.
My current understanding is that 0.7 is basically just the ratio (Probability of AGI after thinking explicitly about the prospect of war) / (Probability of AGI before thinking explicitly about the prospect of war). This isn’t really a separate event from the others in the list; it’s just a consideration that lengthens timelines. It feels like it would also make sense to list other considerations that tend to shorten timelines.
(I do think disruptions and weird events tend to make technological progress slower rather than faster, though I also think they tend to pull tiny probabilities up by adding uncertainty.)
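To illustrate the interpretation above, here is a minimal sketch with hypothetical numbers (the 0.7 is the factor under discussion; the base estimate and the speed-up factor are made up): if 0.7 is a multiplicative update to the headline estimate rather than a separate event, a symmetric treatment would also multiply in factors greater than 1 for considerations that shorten timelines.

```python
# Hypothetical numbers, only to show the structure of the interpretation above.
p_agi_base     = 0.02  # headline estimate before weighing the prospect of war
war_factor     = 0.7   # update for a consideration that lengthens timelines
speedup_factor = 1.3   # hypothetical update for a consideration that shortens timelines

p_agi_adjusted = p_agi_base * war_factor * speedup_factor
print(round(p_agi_adjusted, 4))  # 0.0182
```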