Here’s a common belief in these circles (which I share):
If AI risk is solved through means other than “we collectively coordinate to not build TAI” (a solution I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly a question that’s out of human hands.