By many estimates, solving AI risk would only reduce the total probability of x-risk by 1⁄3, 2⁄3, or maybe 9⁄10 if you weight AI risk very heavily.
Personally, I think humanity's "period of stress" will take at least thousands of years to resolve, though I might be being quite pessimistic. Of course, things will get better, but I think the world will still be "burning" for quite some time.
Here’s a common belief in these circles (which I share):
If AI risk is solved through means other than "we collectively coordinate to not build TAI" (a solution I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly out of human hands.