You write: “In this discussion, there are two considerations that might at first have appeared to be crucial, but turn out to look less important. The first such consideration is whether existence is in general good or bad, à la Benatar (2008). If existence really should turn out to be a harm, sufficiently unbiased descendants would plausibly be able to end it. This is the option value argument. In turn, option value itself might appear to be a decisive argument against doing something so irreversible as ending humanity: we should temporise, and delegate this decision to our descendants. But not everyone enjoys option value, and those who suffer are relatively less likely to do so. If our descendants are selfish, and find it advantageous to allow the suffering of powerless beings, we may not wish to give them option value. If our descendants are altruistic, we do want civilisation to continue, but for reasons that are more general than option value.”
Since the option value argument is not very strong, it seems to be a very important consideration “whether existence in general is good or bad”—or, put less dichotomously, where the threshold for a life worth living lies. Space colonization means more (sentient) beings. If our descendants are altruistic (or have values that we, upon reflection, would endorse), everything is fine anyway. If our descendants are selfish, and the threshold for a life worth living is fairly low, then not much harm will be done (as long as they don’t actively value causing harm, which seems unlikely). If they are selfish and the threshold is fairly high—i.e. a lot of things in a life have to go right in order to make the life worth living—then most powerless beings will probably have bad lives, possibly rendering overall utility negative.