I completely agree with 3 and it’s indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion, especially when combined with scope sensitivity, might increase agential s-risks related to potential catastrophic cooperation failure between AIs (see e.g., Baumann and Harris 2021, 46:24), which are the most worrying s-risks according to Jesse Clifton’s preface to CLR’s agenda. A space filled with life-maximizing aliens who don’t give a crap about welfare might be better than one filled with compassionate humans who create AGIs that might do the exact opposite of what they want (because of escalating conflicts and the like). Obviously, uncertainty remains huge here.
Besides, 1 and 2 seem to be good counter-considerations, thanks! :)
I’m not sure I get why “Singletons about non-life-maximizing values are also convergent”, though. Can you, or anyone else reading this, point to any reference that would help me understand this?
I’m not sure I get why “Singletons about non-life-maximizing values are also convergent”, though.
Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be “Singletons about non-life-maximizing values could also be convergent.” I think that if some technologically advanced species doesn’t go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are the best example, I guess), combined with goal-preserving AIs, would make the emergence of a singleton fairly likely. I’m not very confident in this, though, and I think #2 is the weakest argument. Bostrom’s “The Future of Human Evolution” touches on similar points.