Agree. But I’m sceptical that we could robustly align or control a large population of such AIs (and how would we cap the population?), especially considering the speed advantage they are likely to have.
Yeah, I think a lot of the overall debate, including what is most ethical to focus on(!), depends on AI trajectories and control.
What level of intelligence are you imagining such a system being at? Some percentile on the scale of top-performing humans? Somewhat above the most intelligent humans?
Not sure why this is downvoted; it isn't a rhetorical question, and I genuinely want to know the answer.
Fair point. But AI is indeed unlikely to top out at merely "slightly more" intelligent. And it has the potential for a massive speed/numbers advantage too.
Why do you think this? What makes you think that it's possible at all?[1] And what do you mean by "large minority"? Can you give an approximate percentage?
[1] Or to paraphrase Yampolskiy: what makes it possible for a less intelligent species to indefinitely control a more intelligent species (when this has never happened before)?
Thinking about it some more, I think I mean something more like “subjective decades of strategising and preparation at the level of intelligence of the second mover”, so it would be able to counter anything the second mover does to try and gain power.
But also there would be software intelligence explosion effects (I think the figures you have in your footnote 37 are overly conservative; human level is probably closer to "GPT-5").
I don’t think this is likely to happen though, absent something like moral realism being true, centred around sentient experiences, and the AI discovering this.
I think it hinges on whether our AI successors would be counted as “life”, or whether they “matter morally”. I think the answer is likely no to both[1]. Therefore the risk of extinction boils down to risk of misaligned ASI wiping out the biosphere. Which I think is ~90% likely this century on the default trajectory, absent a well enforced global moratorium on ASI development.
[1] Or at least "no" to the latter; if we consider viruses to be life that don't matter morally, or are in fact morally negative, we can consider (default rogue) ASI to be similar.
Fair point. But this applies to a lot of things in EA. We give what we can.
Any realistic Pause would not be lifted absent a global consensus on proceeding with whatever risk remains.
the AI capability level that poses a meaningful risk of human takeover comes earlier than the AI capability level that poses a meaningful risk of AI takeover.
I don’t think it comes meaningfully earlier. It might only be a few months (an AI capable of doing the work of a military superpower would be capable of doing most work involved in AI R&D, precipitating an intelligence explosion). And the humans wielding the power will lose it to the AI too, unless they halt all further development of AI (which seems unlikely, due to hubris/complacency, if nothing else).
starting off with almost none (which will be true of the ASI)
Any ASI worthy of the name would probably be able to go straight for an unstoppable nanotech computronium grey goo scenario.
To bank on that we would need to have established at least some solid theoretical grounds for believing it’s possible—do you know of any? I think in fact we are closer to having the opposite: solid theoretical grounds for believing it’s impossible!
I could believe consciousness arose more than once in the tree of life (convergent evolution has happened for other things like eyes and flight).
But also, it's probably a sliding scale, and the simple ancestor may well be at least minimally conscious, in which case consciousness would seem easy to reproduce (wire together some ~1,000 nerves).
Fair point. AI could well do this (and go as far as uploading into much larger biological structures, as I pointed to above).
a pause might make multipolar scenarios more likely by giving more groups time to build AGI
That wouldn’t really be a pause! A proper Pause (or moratorium) would include a global taboo on AGI research to the point where as few people would be doing it as are working on eugenics now (and they would be relatively easy to stop).
When an EA cares for their family taking away time from extinction risk they’re valuing their family as much as 10^N people.
Not sure exactly what you mean here—do you mean attending to family matters (looking after family) taking away time from working on extinction risk reduction?
In practice I spend more resources on extinction risk reduction. Part of this is just because I’d really prefer not to die in my 30s.
Thanks for saying this. I feel likewise (but s/30s/40s :))
Or more basic things like religion and nationalism. People will want to shape their utopias in the image of their religious concept of heaven, and of idealised versions of their countries.
Vinding says:
But he does not justify this equality. It seems highly likely to me that ASI-induced s-risks are on a much larger scale than human-induced ones (down to ASI being much more powerful than humanity), creating a (massive) asymmetry in favour of preventing ASI.