In contrast to a bear attack, you don’t expect to know that the “period of stress” has ended during your lifetime. This raises a few questions, like “Is it worth it?” and “How sure can we be that this really is a stress period?” The thought that we especially are in a position to trade our happiness for enormous gains for society—while not impossible—is dangerous in that it’s very appealing, regardless of whether it’s true or not.
The thought that we especially are in a position to trade our happiness for enormous gains for society [...] is dangerous in that it’s very appealing,
I’m not denying that what you say is true, but on the face of it, “the appeal of this ideology is that you have to sacrifice a lot for others’ gain” is not an intuitively compelling message.
In contrast to a bear attack, you don’t expect to know that the “period of stress” has ended during your lifetime.
I expect to know this. Either AI will go well and we’ll get the glorious transhuman future, or it’ll go poorly and we’ll have a brief moment of realization before we are killed, etc. (or, more realistically, a longer moment of awareness where we realize all is truly and thoroughly lost, before eventually the nanobots or whatever come for us).
By many estimates, solving AI risk would only reduce the total probability of X-risk by 1⁄3, 2⁄3, or maybe 9⁄10 if you weight AI risk very heavily.
Personally, I think humanity’s “period of stress” will take at least thousands of years to resolve, though I might be being quite pessimistic. Of course, conditions will improve, but I think the world will still be “burning” for quite some time.
Here’s a common belief in these circles (which I share):
If AI risk is solved through means other than “we collectively coordinate to not build TAI” (a solution which I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly a question that’s out of human hands.