You’re completely correct about a couple of things, and not only am I not disputing them, they are crucial to my argument: first, that I am focusing on only one side of the distribution, and second, that the scenarios I am referring to (the WW2 counterfactual or nuclear war) are improbable.
Indeed, as I have said, even if the probability of the future scenarios I am positing is of the order of 0.00001 (which certainly makes them improbable), that can hardly be grounds for dismissing the argument in this context, precisely because longtermism appeals to the immense consequences of events whose absolute probability is very low.
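(To put an illustrative number on it: if an event with probability 0.00001 affects, say, 10^16 expected future lives, a figure I am inventing purely for illustration, its expected impact is still 0.00001 x 10^16 = 10^11 lives, and this is exactly the kind of calculation longtermism relies on.)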
At the risk of quoting out of context:
If we increase the odds of survival at one of the filters by one in a million, we can multiply one of the inputs for C by 1.000001.
So our new value of C is 0.01 x 0.01 x 1.000001 = 0.0001000001
New expected time remaining for civilization = M x C = 10,000,010,000
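For concreteness, here is the same arithmetic spelled out; note that M is not stated in the quoted lines, so M = 10^14 is my own back-calculation from the quoted product M x C and should be treated as an assumption:

```python
# Reproducing the arithmetic in the quoted passage.
# M = 1e14 is my back-calculation from the quoted M x C, not a figure
# stated in the original; treat it as an assumption.
M = 1e14                              # assumed maximum time horizon (years)
C_baseline = 0.01 * 0.01              # two filters, 1% survival odds each
C_improved = 0.01 * 0.01 * 1.000001   # one filter improved by one in a million

print(M * C_baseline)   # ~10,000,000,000 years
print(M * C_improved)   # ~10,000,010,000 years
```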
In much the same way, it’s absolutely correct that I am referring to one side of the distribution; however, it is not because the other side does not exist or is not relevant, but rather because I want to highlight the magnitude of the uncertainty and how it expands with time.
It also follows that I am in no way disputing (and my argument is somewhat orthogonal to) the different WW2 counterfactuals you’ve outlined.
Several good points made by Linch, Aryeh and steve2512.
As for making my skepticism more precise in terms of probability: it’s less that I have a clear sense of timeline predictions radically different from those of people who believe AGI will explode upon us in the next few decades, and more that I find most of the justifications and arguments for a timeline of less than 50 years rather unconvincing.
For instance, having studied and used state-of-the-art deep learning models, I am simply not able to see why we are significantly closer to AGI today than we were in the 1950s. General intelligence requires something qualitatively different from GPT-3 or AlphaGo, and I have seen literally zero evidence that any AI system comprehends things in anything even remotely close to the way humans do.
Note that the last point (namely, that an AI should understand objects, events, and relations the way humans do) is not as such a requirement for AGI, but it does make me skeptical of people who cite these systems as evidence of the progress we’ve made towards general intelligence.
I have looked at Holden’s post and there are several things that are not clear to me. Here is one: there appears to be a lot of focus on the number of computations, especially in comparison to the human brain, and while I have little doubt that artificial systems will surpass those limits (if they have not already done so), the real question is decoding the nature of the wiring and the functional form of the relation between inputs and outputs. Perhaps there is something I am not getting here, but (at least in principle) aren’t there infinitely many degrees of freedom associated with a continuous function? Even if one argued that we can define equivalence classes of similar functions (suitably made rigorous), does that still not leave us with an extremely large number of possibilities?
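To illustrate the distinction I am pointing at (this is my own toy example, not anything from Holden’s post): two networks with identical architectures, and therefore identical computation counts per forward pass, can implement very different input-output functions, so matching the brain’s compute budget says little by itself about matching its function.

```python
# Toy illustration (my own, hypothetical): two one-hidden-layer MLPs with
# identical shapes -- hence identical multiply-add counts per forward pass --
# but different weights, computing very different functions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer MLP; the shapes fix the compute cost, not the function."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

shapes = [(1, 16), (16,), (16, 1), (1,)]          # same architecture for both
params_a = [rng.normal(size=s) for s in shapes]   # one random weight setting
params_b = [rng.normal(size=s) for s in shapes]   # another, same shapes/FLOPs

x = np.linspace(-3, 3, 200).reshape(-1, 1)
f_a = mlp(x, *params_a)
f_b = mlp(x, *params_b)

# Identical compute budgets, very different input-output mappings.
print("mean |f_a - f_b|:", float(np.mean(np.abs(f_a - f_b))))
```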