I think even on EA’s own terms (apart from any effects from EA being fringe) there’s a good reason for EAs to be OK with being more stressed and unhappy than people with other philosophies.
On the scale of human history we’re likely in an emergency situation, one where we have an opportunity to trade off the happiness of EAs for enormous gains in total well-being. It’s similar to a bear attack: you accept that you won’t feel relaxed and happy while you try to fend it off, but that period of stress is worth it overall. This is especially true if you believe we’re in the hinge of history.
Eh, this logic can be used to justify a lot of extreme action in the name of progress. Communists and Marxists had a lot of thoughts about the “hinge of history” and used them to unleash terrible destruction on the rest of humanity.
In contrast to a bear attack, you don’t expect to know that the “period of stress” has ended during your lifetime. Which raises a few questions, like “Is it worth it?” and “How sure can we be that this really is a stress period?”. The thought that we especially are in a position to trade our happiness for enormous gains for society—while not impossible—is dangerous in that it’s very appealing, regardless of whether it’s true or not.
The thought that we especially are in a position to trade our happiness for enormous gains for society [...] is dangerous in that it’s very appealing,
I’m not denying that what you say is true, but on the face of it, “the appeal of this ideology is that you have to sacrifice a lot for others’ gain” is not an intuitively compelling message.
In contrast to a bear attack, you don’t expect to know that the “period of stress” has ended during your lifetime.
I expect to know this. Either AI will go well and we’ll get the glorious transhuman future, or it’ll go poorly and we’ll have a brief moment of realization before we are killed, etc. (or, more realistically, a longer moment of awareness in which we realize all is truly and thoroughly lost, before eventually the nanobots or whatever come for us).
By many estimates, solving AI risk would only reduce the total probability of X-risk by 1/3, 2/3, or maybe 9/10 if you weight AI risk very heavily.
Personally, I think humanity’s “period of stress” will take at least thousands of years to resolve, though I might be being quite pessimistic. Of course things will get better, but I think the world will still be “burning” for quite some time.
Here’s a common belief in these circles (which I share):
If AI risk is solved through means other than “we collectively coordinate to not build TAI” (a solution I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly a question that’s out of human hands.