While Hanson is correct that the Long Reflection is rather dystopian, his alternatives are worse: his “Age of Em” gives plenty of examples from a hypothetical society that is more dystopian than the “Long Reflection”.
Hanson’s scenario of “a very real possibility that this regime could continue forever” is certainly worrying, but I view it as an improvement over certain alternatives, namely: AGI destroying humanity, severe value drift from unconstrained whole brain emulation + editing + economic pressures, and the resultant S-risk-type scenarios.
So I don’t agree that a Long Reflection is worse than those alternatives.
More speculatively, I think he understates how prevalent status-quo bias is, here:
“the effect of preventing all such changes over a long period, allowing only the changes required to support philosophical discussions, would be to have changed society enormously, including changing common attitudes and values regarding change.”
Most of human history was extremely static relative to history since the Industrial Revolution, so humanity seems well adapted to a static society. There are substantial political movements dedicated to slowing or reversing change. The average human attitude towards change is not uniformly positive!
I don’t think Hanson would disagree with this claim (that the future is more likely to be better by current values, given the long reflection, compared to e.g. Age of Em). I think it’s a fundamental values difference.
Robin Hanson is an interesting and original thinker, but not only is he not an effective altruist, he also explicitly doesn’t want to make the future go well according to anything like present human values.
The Age of Em, which Hanson clearly doesn’t think is an undesirable future, would contain very little of what we value. Hanson says as much, but treats it as a feature, not a bug. Scott Alexander:

“Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of ‘I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?’”
Since Hanson doesn’t have a strong interest in steering the long-term future to be good by current values, it’s obvious why he wouldn’t be a fan of an idea like the long reflection, which has that as its main goal but produces bad side effects in the course of giving us a chance of achieving that goal. It’s just a values difference.
I have values, and The Age of Em overall contains a great deal that I value, and in fact probably more of what I value than does our world today.
Afaict there is a difference between the Long Reflection and Hanson’s discussion about brain emulations, in that Hanson focuses more on prediction, whereas the debate on the Long Reflection is more normative (ought it to happen?).
If Hanson thinks WBE and the predictions he derives from it are likely, barring some external event or radical change, and he also doesn’t favor a Long Reflection, isn’t that equivalent to saying his scenario is more desirable than the Long Reflection?
I see an unconstrained Age of Em as better than an eternal long reflection.
As you laid out in the post, your biggest concern about the long reflection is the likely outcome of a pause—is that roughly correct?
In other words, I understand your preferences are roughly:
Extinction < Eternal Long Reflection < Unconstrained Age of Em < Century-long reflection followed by Constrained Age of Em < No reflection + Constrained Age of Em
(As an aside, I would assume that without changing the preference order, we could replace unconstrained versus constrained Age of Em with, say, indefinite robust totalitarianism versus “traditional” transhumanist future.)
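Purely as a convenience for this back-and-forth, here is a minimal Python sketch encoding that ordering as a worst-to-best list; the scenario labels are my own shorthand, and the sketch formalizes nothing beyond the ranking itself, so please treat it as a reading aid rather than a model of your actual preferences:

```python
# A rough sketch, not a formal model of anyone's views: the preference ordering
# stated above, from worst scenario to best. Labels are my own shorthand.
PREFERENCE_ORDER = [
    "extinction",
    "eternal long reflection",
    "unconstrained Age of Em",
    "century-long reflection, then constrained Age of Em",
    "no reflection, constrained Age of Em",
]

def prefers(a: str, b: str) -> bool:
    """True if scenario `a` ranks above scenario `b` (later in the list is better)."""
    return PREFERENCE_ORDER.index(a) > PREFERENCE_ORDER.index(b)

# The aside's substitution: swap the Age of Em entries for analogous scenarios;
# the claim is only that the relative ordering itself stays the same.
SUBSTITUTED_ORDER = [
    "extinction",
    "eternal long reflection",
    "indefinite robust totalitarianism",
    "century-long reflection, then 'traditional' transhumanist future",
    "no reflection, 'traditional' transhumanist future",
]

assert prefers("eternal long reflection", "extinction")
```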
I don’t have great confidence that the kinds of constraints that would be imposed on an age of em after a long reflection would actually improve that age and the ages that follow.
Yes, you’ve mentioned your skepticism of the efficacy of a long reflection, but conditional on it successfully reducing bad outcomes, you agree with the ordering?
You’ll also need to add increasing good outcomes, along with decreasing bad outcomes.
The long reflection as I remember it doesn’t have much to do with AGI destroying humanity, since AGI is something that on most timelines we expect to have resolved within the next century or two, whereas the long reflection was something Toby envisaged taking multiple centuries. The same probably applies to whole brain emulation.
This seems like quite an important problem for the long reflection case—it may be so slow a scenario that none of its conclusions will matter.