Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.
Could there arise “evil” mirror-touch synaesthetes? In one sense, no. You can’t go around wantonly hurting other sentient beings if you feel their pain as your own. Full-spectrum intelligence is friendly intelligence. But in another sense yes, insofar as primitive mirror-touch synaesthetes are prey to species-specific cognitive limitations that prevent them acting rationally to maximise the well-being of all sentience. Full-spectrum superintelligences would lack those computational limitations in virtue of their full cognitive competence in understanding both the subjective and the formal properties of mind. Perhaps full-spectrum superintelligences might optimise your matter and energy into a blissful smart angel; but they couldn’t wantonly hurt you, whether by neglect or design.
Interesting. Yes, I guess such “full-spectrum superintelligence” might well be good by default, but the main worry from the perspective of the Yudkowsky/Bostrom paradigm is not this. Perhaps it’s better described as super-optimisation, or super-capability (i.e. a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals).
Regarding the feasibility of conscious AGI / Pearce’s full-spectrum superintelligence: maybe it would be possible with biology involved somewhere. But getting from here to there seems ethically very fraught (e.g. the already-terrifying experiments with mini-brains). Or maybe quantum computers would be enough?
Pearce calls it “full-spectrum” to emphasise the difference with Bostrom’s “Super-Watson” (using Pearce’s words).
> … a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals …
Given how apparently useful cross-modal world simulations (i.e. consciousness) have been for evolution, I again doubt that such a dumb process (in the sense of not knowing what it is doing) can pose an immediate existential danger to humanity that we won’t notice or won’t be able to stop.
Actually, if I remember correctly, Pearce thinks that if “full-spectrum superintelligence” is going to emerge, it’s most likely to be biological, and even post-human (i.e. it is human descendants who will possess such super minds, not (purely?) silicon-based machines). Pearce sometimes calls this the “biotechnological singularity”, or “BioSingularity” for short, by analogy with Kurzweil’s “technological singularity”. One can read more about this in Pearce’s The Biointelligence Explosion (or in this “extended abstract”).
> … perhaps they should be deliberately aimed for?
David Pearce might argue for this if he thought that a “superintelligent” unconscious AGI (implemented on a classical digital computer) were feasible. E.g. from his The Biointelligence Explosion: