If AI systems replace humanity, that outcome would undoubtedly be an absolute disaster for the eight billion human beings currently alive on Earth. However, it would be a localized, short-term disaster rather than an astronomical one. Bostrom’s argument, strictly interpreted, no longer applies to this situation. The reason is that the risk is confined to the present generation of humans: the question at stake is simply whether the eight billion people alive today will be killed or allowed to continue living. Even if you accept that killing eight billion people would be an extraordinarily terrible outcome, it does not automatically follow that this harm carries the same moral weight as a catastrophe that permanently eliminates the possibility of 10^23 future lives.
This only holds if the future value of a universe in which AIs took over is almost exactly the same as the future value of one in which humans remained in control, differing by less than one part in a billion (and, I think, by less than one part in a billion billion billion billion billion billion). Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower. But it is extremely unlikely that the two values would be exactly the same. Therefore, in all likelihood, whether AI takes over or not does have enormous long-term implications.
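As a rough back-of-the-envelope check (a sketch only, taking the 10^23 future lives quoted above and a present population of about eight billion as the inputs):

$$
\frac{\text{value at stake for the present generation}}{\text{potential future value}}
\approx \frac{8 \times 10^{9}\ \text{lives}}{10^{23}\ \text{lives}}
\approx 10^{-13}.
$$

On these numbers, the two futures would have to agree in value to within roughly one part in ten trillion for the long-term stakes to be smaller than the short-term ones, an even tighter margin than one part in a billion.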
Some people argue that the value of the universe would be higher if AIs took over, and the vast majority of people argue that it would be lower.
Would the dinosaurs have argued that their extinction would be bad, even though it may well have contributed to the emergence of mammals and ultimately humans? Would the vast majority of non-human primates have argued that humans taking over would be bad?
But it is extremely unlikely that the two values would be exactly the same. Therefore, in all likelihood, whether AI takes over or not does have enormous long-term implications.
Why? It could be that future value is not exactly the same whether or not AI takes over by a given date, but that the long-run difference in value is negligible.