(Half baked and maybe just straight up incorrect about people’s orientations)
I worry a bit that groups thinking about the post-AGI future (e.g., Forethought) will not want to push for something like super-optimized flourishing, because this will seem weird and possibly uncooperative with factions that don’t like the vibe of super-optimization. This might happen even if these groups do believe in their hearts that super-optimized flourishing is the best outcome.
It is very plausible to me that the situation is “convex”, in the sense that it is better for the super-optimizers to optimize fully with their share of the universe while the other groups do what they want with their share (with rules to prevent extreme suffering, pessimization, etc.). I think this approach might be better for all groups than aiming for a more universal middle ground that leaves everyone disappointed. That bad middle ground might look like a universe that is not very optimized for flourishing and yet is still super weird and unfamiliar. (A rough formalization of this is sketched after this comment.)
It would be very sad if we missed out on the optimized flourishing because we were trying not to seem weird or uncooperative.
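A minimal sketch of the “convex” point above, under assumptions added purely for illustration: two groups with utility functions $u_1, u_2$ over how a parcel of resources is used, utilities that add up across parcels, and each $u_i$ convex in a one-dimensional “use” parameter. Let $a_1, a_2$ be the two groups’ ideal uses, and let a “blended” compromise use every parcel at $c = \lambda a_1 + (1-\lambda)\,a_2$. Convexity then gives, for each group $i$,

$$u_i\big(\lambda a_1 + (1-\lambda)\,a_2\big) \;\le\; \lambda\, u_i(a_1) + (1-\lambda)\, u_i(a_2).$$

The right-hand side is exactly what group $i$ gets under a split in which a fraction $\lambda$ of the parcels is used in mode $a_1$ and the rest in mode $a_2$, so under these (strong) assumptions every group weakly prefers the split to the blend.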
Two hours before you posted this, MacAskill posted a brief explanation of viatopianism.
This essay is the first in a series that discusses what a good north star [for post-superintelligence society] might be. I begin by describing a concept that I find helpful in this regard:
Viatopia: an intermediate state of society that is on track for a near-best future, whatever that might look like.
Viatopia is a waystation rather than a final destination; etymologically, it means “by way of this place”. We can often describe good waystations even if we have little idea what the ultimate destination should be. A teenager might have little idea what they want to do with their life, but know that a good education will keep their options open. Adventurers lost in the wilderness might not know where they should ultimately be going, but still know they should move to higher ground where they can survey the terrain. Similarly, we can identify what puts humanity in a good position to navigate towards excellent futures, even if we don’t yet know exactly what those futures look like.
In the past, Toby Ord and I have promoted the related idea of the “long reflection”: a stable state of the world where we are safe from calamity, and where we reflect on and debate the nature of the good life, working out what the most flourishing society would be. Viatopia is a more general concept: the long reflection is one proposal for what viatopia would look like, but it need not be the only one.
I think that some sufficiently-specified conception of viatopia should act as our north star during the transition to superintelligence. In later essays I’ll discuss what viatopia, concretely, might look like; this note will just focus on explaining the concept.
. . .
Unlike utopianism, it cautions against the idea of having some ultimate end-state in mind. Unlike protopianism, it attempts to offer a vision for where society should be going. It focuses on achieving whatever society needs to be able to steer itself towards a truly wonderful outcome.
I think I’m largely on board. I think I’d favor doing some amount of utopian planning (aiming for something like hedonium and acausal trade). Viatopia sounds less weird than utopias like that. I wouldn’t be shocked if Forethought talked relatively more about viatopia because it sounds less weird. I would be shocked if they pushed us in the direction of anodyne final outcomes. I agree with Peter that stuff is “convex”, but I don’t worry that Forethought will have us tile the universe with compromisium. But I don’t have much private info.

I should read that piece. In general, I am very into the Long Reflection and I guess also the Viatopia stuff.
Yeah, agreed on that point. Folks at Forethought aren’t necessarily thinking about what a near-optimal future should look like; they’re thinking about how to get civilisation to a point where we can make the best possible decisions about what to do with the long-term future.
Actually Jordan, futures better than “pretty ok” are explicitly something that folks at Forethought have been thinking about. Just not in the Viatopia piece.
Check this out: https://www.forethought.org/research/better-futures
Hey again, sorry to spam you as I just commented on another piece of yours but am really vibing with your content!
I’m really hoping we can get something like this; I’ve been calling it “existential compromise.”
I worry that it may be difficult to get humanity to agree that we should use even a small fraction of future resources optimally (see my research on this here), as I agree that whatever is optimal seems like it will be a very weird[1] thing.

I think a compromise like this, with things split between optimal use and a more (trans-)human-friendly world, makes a lot of sense and could perhaps be achieved if we can get society onto a good viatopian path; I’ve described what I hope that could look like here.

I would also say that Will MacAskill’s “grand bargain”, and his suggestions that we should try to aim for compromise in order to achieve “the good de dicto”, feel to me like he is actually arguing that we need to aim for what is in fact best with a significant fraction of future resources.
Nick Bostrom has also argued (page 14) that it should be possible for humans to get almost all of what they want and yet most resources could be optimized for “super-beneficiaries”.
Essentially I think we need to get whatever process[2] is used to decide what we do with future resources to be extremely careful and thorough. I think we should target that process specifically, but it might also be good to broadly try to differentially accelerate society’s wisdom (perhaps with AI).
Additionally, it could be really good to delay or slow things down so that we have more time to realize the existential stakes, giving us the time it takes to collectively mature and grow wiser.
[1] Recently, Will MacAskill’s research has debated whether or not the best possible use of resources is something very extreme, calling this thesis “extremity of the best” (EOTB). I think this is very likely to be the case.
[2] I think this includes things like AGI/superintelligence governance, global constitutional conventions, and obviously any kind of long reflection or coherent extrapolated volition that we collectively decide to undergo.
Hmm this is interesting.

Speculatively, I think there could actually just be convergence here, though, once you account for moral uncertainty and for very plausible situations where things that are bad by everyone’s lights are as bad as, say, utilitarian nightmares but just easier to get others on board for (i.e., extreme power).