So the debate week statement (footnote 2) says “earth-originating intelligent life”. What if you disagree that AI counts as “life”? I expect that a singleton ASI will take over and will not be sentient or conscious, or value anything that humans value (i.e. the classic Yudkowskian scenario).
Why so confident that:
- It’ll be a singleton AI that takes over?
- It won’t be conscious?
I’m at 80% or more that there will be a lot of conscious AIs, if AI takes over.
I’m surprised you think future AI would be so likely to be conscious, given the likely advantages of creating non-conscious systems in terms of simplicity and usefulness. (If consciousness is required for much greater intelligence, I would feel differently, but that seems very non-obvious!)
On it not being conscious: it would share no evolutionary history or biology with us (though I guess it’s possible it could find a way to upload itself into biology).
Do you think octopuses are conscious? I do; they seem smarter than chickens, for instance. But their most recent common ancestor with vertebrates was some kind of simple Precambrian worm with a very basic nervous system.
Either that most recent common ancestor was not phenomenally conscious in the sense we have in mind, in which case consciousness arose more than once in the tree of life; or else it was conscious, in which case consciousness would seem easy to reproduce (wire together some ~1,000 neurons).
I could believe consciousness arose more than once in the tree of life (convergent evolution has happened for other things like eyes and flight).
But also, it’s probably a sliding scale, and the simple ancestor may well be at least minimally conscious.
Fair point. AI could well do this (and go as far as uploading into much larger biological structures, as I mentioned above).
I don’t think this is likely to happen though, absent something like moral realism (centred on sentient experiences) being true and the AI discovering this.
Singleton takeover seems very likely simply down to the speed advantage of the first mover (at the sharp end of the intelligence explosion it will be able to do subjective decades of R&D before the second mover gets off the ground, even if the second mover is only hours behind).
Where are you getting those numbers from? If by “subjective decades” you mean “decades of work by one smart human researcher”, then I don’t think that’s enough to secure its position as a singleton.
If you mean “decades of global progress at the global tech frontier”, then I’m skeptical that the first mover can fit ~100 million human research-years into a few hours shortly after (presumably) pulling away from the second mover in a software intelligence explosion (for reasons I’m happy to elaborate on).
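To make the gap between those two readings concrete, here’s a quick back-of-envelope sketch; the lead time, workforce size, and year counts below are illustrative assumptions of mine, not figures either of us has committed to:

```python
# Back-of-envelope contrasting the two readings of "subjective decades".
# All inputs are illustrative assumptions, not figures from this thread.

HOURS_PER_YEAR = 24 * 365

# Reading 1: decades of work by one smart human researcher, fitted into
# the first mover's lead time over the second mover.
lead_time_hours = 3        # assumed lead ("only hours behind")
subjective_years = 20      # "subjective decades"
serial_speedup = subjective_years * HOURS_PER_YEAR / lead_time_hours
print(f"Reading 1 implies a ~{serial_speedup:,.0f}x speedup over one human")
# ~58,000x: a strong claim, but only about serial thinking speed.

# Reading 2: decades of progress at the global tech frontier.
frontier_researchers = 5_000_000   # assumed size of the relevant workforce
frontier_years = 20
research_years = frontier_researchers * frontier_years
print(f"Reading 2 implies ~{research_years:,} human research-years in hours")
# ~100,000,000 research-years: the figure I'm questioning above, and a
# vastly stronger claim than Reading 1.
```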
Thinking about it some more, I think I mean something more like “subjective decades of strategising and preparation at the level of intelligence of the second mover”, so that it would be able to counter anything the second mover does to try to gain power.
But also there would be software intelligence explosion effects (I think the figures you have in your footnote 37 are overly conservative; human level is probably closer to “GPT-5”).
Interesting. What makes you confident about AI consciousness?
Not sure why this is downvoted; it isn’t a rhetorical question, and I genuinely want to know the answer.