I’m not sure I get why “Singletons about non-life-maximizing values are also convergent”, though.
Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be “Singletons about non-life-maximizing values could also be convergent.” I think that if a technologically advanced species doesn’t go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are probably the best example), combined with goal-preserving AIs, would make the emergence of a singleton fairly likely. I’m not very confident in this, though, and I think #2 is the weakest argument. Bostrom’s “The Future of Human Evolution” touches on similar points.