In contrast to a bear attack, you don’t expect to know that the “period of stress” has ended during your lifetime.
I expect to know this. Either AI will go well and we’ll get the glorious transhuman future, or it’ll go poorly and we’ll have a brief moment of realization before we are killed (or, more realistically, a longer moment of awareness in which we realize all is truly and thoroughly lost, before the nanobots or whatever eventually come for us).
By many estimates, solving AI risk would reduce the total probability of x-risk by only 1⁄3 or 2⁄3, or maybe 9⁄10 if your probability mass is heavily weighted toward AI risk.
Personally I think humanity’s “period of stress” will take at least thousands of years to resolve, though I might be being quite pessimistic. Of course things will get better, but I think the world will still be “burning” for quite some time.
Here’s a common belief in these circles (which I share):
If AI risk is solved through means other than “we collectively coordinate to not build TAI” (a solution which I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly a question that’s out of human hands.