All right, I’ll come back for one more question. Thanks, Wei. Tough question. Briefly,
(1) I can’t see that many paths to victory. The only ones I can see go through either (a) aligned de-novo AGI (which needs to be at least powerful enough to safely prevent misaligned systems from undergoing intelligence explosions) or (b) very large amounts of global coordination (which would be necessary either to take our time & go cautiously, or to leap all the way to WBE without someone creating a neuromorph first). Both paths look pretty hard to walk, but in short, (a) looks slightly more promising to me. (Though I strongly support any attempts to widen path (b)!)
(2) It seems to me that the default path leads almost entirely to UFAI: insofar as MIRI research makes it easier for others to create UFAI, most of that effect isn’t replacing wins with losses, it’s just making the losses happen sooner. By contrast, this sort of work seems necessary in order to keep path (a) open. I don’t see many other options. (In other words, I think it’s net positive because it creates some wins and moves some losses sooner, and that seems like a fair trade to me.)
To make that a bit more concrete, consider logical uncertainty: if we attain a good formal understanding of logically uncertain reasoning, that’s quite likely to shorten AI timelines. But I think I’d rather have a 10-year time horizon and be dealing with practical systems built upon solid foundations that come from a decade’s worth of formally understanding what good logically uncertain reasoning looks like, rather than a 20-year time horizon where we have to deal with systems built using 19 years of hacks and 1 year of patches bolted on at the end.
(In other words, the possibility of improving AI capabilities is the price you have to pay to keep path (a) open.)
A bunch of other factors also play into my considerations (including a heuristic which says “the best way to figure out which problems are the real problems is to start solving the things that appear to be the problems,” and another heuristic which says “if you see a big fire, try to put it out, and don’t spend too much time worrying about whether putting it out might actually start worse fires elsewhere”, and a bunch of others), but those are the big considerations, I think.