Interesting. Yes, I guess such “full-spectrum superintelligence” might well be good by default, but the main worry from the perspective of the Yudkowsky/Bostrom paradigm is not this—perhaps it’s better described as super-optimisation, or super-capability (i.e. a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals).
Regarding the feasibility of conscious AGI / Pearce’s full-spectrum superintelligence, maybe it would be possible with biology involved somewhere. But getting from here to there seems ethically fraught (e.g. the already-terrifying experiments with mini-brains). Or maybe quantum computers would be enough?
Pearce calls it “full-spectrum” to emphasise the difference with Bostrom’s “Super-Watson” (using Pearce’s words).
… a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals …
Given how apparently useful cross-modal world simulations (i.e. consciousness) have been for evolution, I, again, doubt that such a dumb process (in the sense of not knowing what it is doing) could pose an immediate existential danger to humanity that we wouldn’t notice or wouldn’t be able to stop.
Regarding the feasibility of conscious AGI / Pearce’s full-spectrum superintelligence, maybe it would be possible with biology involved somewhere. But getting from here to there seems ethically fraught (e.g. the already-terrifying experiments with mini-brains).
Actually, if I remember correctly, Pearce thinks that if “full-spectrum superintelligence” is going to emerge, it’s most likely to be biological, and even post-human (i.e. it is human descendants who will possess such super minds, not (purely?) silicon-based machines). Pearce sometimes calls this the “biotechnological singularity”, or “BioSingularity” for short, analogously to Kurzweil’s “technological singularity”. One can read more about this in Pearce’s The Biointelligence Explosion (or in this “extended abstract”).