You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.
But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?
For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom), and this somehow prevents a long reflection, whether through permanent totalitarianism or simply through something like locking in extreme norms of caution that stifle free thought. Or perhaps, in order to avoid disastrously misaligned AI systems, we have to make certain choices that are hard to reverse later, so we need at least some idea up front of what we should ultimately choose to value.
(I've only started the book; this may well be addressed there already.)
I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection; whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to be completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation of how a long reflection is compatible with the creation of a human-aligned AGI.