It’s great to have these quotes all in one place. :)
In addition to the main point you made—that the futures containing the most suffering are often the ones that it’s too late to stop—I would also argue that even reflective, human-controlled futures could be pretty terrible because a lot of humans have (by my lights) some horrifying values. For example, human-controlled futures might accept enormous s-risks for the sake of enormous positive value, might endorse strong norms of retribution, might severely punish outgroups or heterodoxy, might value giving agents free will more than preventing harm (cf. the “free will theodicy”), and so on.
The option-value argument works best when I specifically am the one whose options are being kept open (although even in this case there can be concerns about losing my ideals, becoming selfish, being corrupted by other influences, etc). But humanity as a whole is a very different agent from myself, and I don’t trust humanity to make the same choices I would; often it would choose the exact opposite.
If paperclip maximizers wait to tile the universe with paperclips because they want to first engage in a Long Reflection to figure out if those paperclips should be green or blue, or whether they should instead be making staples, this isn’t exactly reassuring.