An ‘option value’ argument assumes that (a) the AI wouldn’t take that uncertainty into account, and (b) the AI wouldn’t be able to recreate humanity at some later point if it decided that this was in fact the correct maximisation course. Even if it set us back by a full 10,000 years (very roughly the time from the dawn of civilisation up to now), it wouldn’t obviously be that bad in the long run. Indeed, for all we know this could have already happened...
In other words, in the context of an ultra-powerful, ultra-well-resourced, ultra-smart AI, there are few things in this world that are truly irreversible, and I see little need to give special ‘option value’ to humanity’s, or even civilisation’s, existence.
Agree with the rest of your post re. rhetoric, and that’s generally what I’ve assumed is going on here when this has puzzled me too.
Agree with this. I was being a bit vague about what the option value was, but I was thinking of something like the value of not locking in a value set that, on reflection, we would disagree with. I think this covers some but not all of the scenarios Rhys was discussing.