Regarding “I don’t think that the Spanish flu made us more prepared against Covid-19”: actually, I’m betting our response to Covid-19 was better than it would have been without having had major pandemics in the past. For example, the response involved developing effective vaccines very quickly.
Simon: more generally, it would be good to have a regularly updated record of the most important AI safety papers of each year.
Great post! A few reactions:
1. With space colonization, we can hopefully create causally isolated civilizations. Once this happens, the risk of total civilizational collapse falls dramatically, because collapse events become independent: with k isolated civilizations each collapsing with probability r, the probability that all of them collapse is r^k, which shrinks rapidly as k grows.
2. There are two different kinds of catastrophic risk: chancy, and merely uncertain. Compare flipping a fair coin (chancy) to flipping a coin that is either double-headed or double-tailed, but you don’t know which (merely uncertain). If alignment risk is merely uncertain, then conditional on solving it once, we are in the double-headed case, and we will solve it again. Alignment might be like this: for example, on one picture, alignment might be brute-forceable with enough data, but we just don’t know whether this is so. At any rate, merely uncertain catastrophic risks do not have rerun risk, while chancy ones do (a toy calculation after this list).
3. I’m a bit skeptical of demographic decline as a catastrophic risk, because of evolutionary pressure. If some groups stop reproducing, groups with high reproduction rates will tend to replace them.
4. Regarding unipolar outcomes, you’re suggesting a picture where unipolar outcomes carry less catastrophic risk but more lock-in risk. I’m unsure of this. First, a unipolar world government might have a higher risk of civil unrest. In particular, you might think that elites tend to treat residents better out of fear of external threats; without such threats, they may exploit residents more, leading to more civil unrest. Second, unipolar AI outcomes may have a higher risk of AI going rogue than multipolar ones, because in multipolar outcomes, humans may have extra value to AIs as partners in competition against other AIs.
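To make the rerun-risk point in (2) concrete, here is a toy calculation (my own illustration; the symbols p and q are assumptions, not from the post). Suppose that if the risk is chancy, each run independently fails with probability p, and that if it is merely uncertain, we are in a solvable world with probability q and an unsolvable one otherwise. Then

$$P(\text{survive } n \text{ runs}) = \begin{cases} (1-p)^n & \text{chancy} \\ q & \text{merely uncertain.} \end{cases}$$

The chancy probability decays to zero as n grows, while the merely uncertain one is flat in n; and conditional on surviving the first run, it jumps to 1. That is the sense in which only chancy risks have rerun risk.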
Great post! Here are three reactions:
1. Markets as viatopia.
You say that “Without a positive vision, we risk defaulting to whatever emerges from market and geopolitical dynamics, with little reason to think that the result will be anywhere close to as good as it could be.”
In fact, I think markets are potentially the best pathway to viatopia. The fundamental theorems of welfare economics suggest that, absent market failures, competitive markets produce Pareto-efficient outcomes. One route to the best superintelligent future is simply to avoid market concentration (ensure there is more than one superintelligence), and have governments impose Pigouvian taxes on externalities, the end. Then all of the interesting work is in designing antitrust and tax measures that are robust to superintelligence (a minimal illustration of the Pigouvian piece is below). No easy task, but at least it is well-defined. Here, one more general question is whether superintelligence would destroy any of the fundamental conditions that allow markets to maximize social welfare.
I think of this market-based approach as a version of viatopia as opposed to “utopia” in the narrow sense you defined. The point of markets is that rather than trying to define and control the best future directly, we allow that future to emerge through the market.
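As a minimal illustration of the Pigouvian point (standard textbook setup; the functions MB, MPC, and MEC are my notation, not from the post): with marginal benefit MB(q), marginal private cost MPC(q), and marginal external cost MEC(q), the unregulated market settles where $MB(q_m) = MPC(q_m)$, which overproduces, while the social optimum $q^*$ solves

$$MB(q^*) = MPC(q^*) + MEC(q^*).$$

A per-unit tax $t = MEC(q^*)$ turns the private condition into $MB(q) = MPC(q) + t$, whose solution is exactly $q^*$. The superintelligence-robust version of this is the hard part: estimating MEC when the externalities are produced by superintelligent agents.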
2. Evolutionary mechanisms for viatopia.
Your post doesn’t emphasize evolutionary perspectives on viatopia. One strategy could be to ensure diversity across civilizational types, along with mutation mechanisms, and to try to create selection mechanisms that correlate fitness with welfare. In that case, we can expect welfare to increase in the long run (a toy simulation below).
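Here is a minimal sketch of that mechanism (a hypothetical toy model of my own; the fitness-welfare link and all parameters are assumptions, not anything from the post): replicator dynamics over civilization types, where fitness is a noisy increasing function of welfare, so selection pushes population-average welfare upward.

```python
# Toy replicator dynamics: civilization "types" with fixed welfare levels;
# fitness is a noisy increasing function of welfare, so selection should
# raise the population-average welfare over generations.
import numpy as np

rng = np.random.default_rng(0)
n_types = 50
welfare = rng.uniform(0.0, 1.0, n_types)   # welfare level of each type
shares = np.full(n_types, 1.0 / n_types)   # initial population shares

def step(shares, welfare, noise=0.05):
    """One generation: each type grows in proportion to its fitness."""
    fitness = np.clip(welfare + rng.normal(0.0, noise, n_types), 1e-9, None)
    shares = shares * fitness
    return shares / shares.sum()

for _ in range(200):
    shares = step(shares, welfare)

print("mean welfare at start:", welfare.mean())
print("mean welfare after selection:", (shares * welfare).sum())
```

The mutation piece (new types entering over time) is omitted here; with it, the long-run claim requires that selection keep tracking welfare as the type distribution shifts.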
3. To promote viatopia, you focus on a series of societal primary goods. But these primary goods are plausibly also the focus of protopian efforts. What are some examples where protopianism and viatopianism recommend different actions?