To add on to what you already have, there’s also a flavor of “urgency / pessimism despite slow takeoff” that comes from pessimistic answers to the following two questions:
How early do the development paths toward “safe AGI” and “default AGI” diverge?
On one extreme, they might not diverge at all: we build “default AGI”, and fix problems as we find them, and we wind up with “safe AGI”. On the opposite extreme, they may diverge very early (or already!), with entirely different R&D paths requiring dozens of non-overlapping insights and programming tools and practices.
I personally put a lot of weight on “already”, on the theory that there are right now dozens of quite different lines of ongoing ML / AI research that seem to lead towards quite different AGI destinations, and it seems implausible to me that they will all wind up at the same destination (or all fail), or that the destinations will all be more-or-less equally good / safe / beneficial.
If we know how to build an AGI in a way that is knowably and unfixably dangerous, can we coordinate on not doing so?
One extreme would be “yes we can coordinate, even if there’s already code for such an AGI published on GitHub that runs on commodity hardware”. The other extreme would be “No, we can’t coordinate; the best we can hope for is delaying the inevitable, hopefully long enough to develop a safe AGI along a different path.”
Again, I personally put a lot of weight on the pessimistic view (see my discussion here); but others seem to be more optimistic that this kind of coordination problem might be solvable, e.g. Rohin Shah here.