Eric Drexler: Reframing Superintelligence

When people first began to discuss advanced artificial intelligence, existing AI was rudimentary at best, and we had to rely on ideas about human thinking and extrapolate from them. Now, however, we’ve developed many different advanced AI systems, some of which outperform human thinking on certain tasks. In this talk from EA Global 2018: London, Eric Drexler argues that we should use this new data to rethink our models for how superintelligent AI is likely to emerge and function.

A transcript of Eric’s talk is below, which CEA has lightly edited for clarity. You can also watch this talk on YouTube, or read the transcript on effectivealtruism.org.

The Talk

I’ve been working in this area for quite a while. The chairman of my doctoral committee was one Marvin Minsky. We had some discussions on AI safety around 1990. He said I should write them up. I finally got around to writing up some developed versions of those ideas just very recently, so that’s some fairly serious procrastination. Decades of procrastination on something important.

For years, one couldn’t talk about advanced AI. One could talk about nanotechnology. Now it’s the other way around. You can talk about advanced AI, but not about advanced nanotechnology. So this is how the Overton window moves around.

What I would like to do is to give a very brief presentation which is pretty closely aligned with talks I’ve given at OpenAI, DeepMind, FHI, and Bay Area Rationalists. Usually I give this presentation to a somewhat smaller number of people, and structure it more around discussion. But what I would like to do, still, is to give a short talk, put up points for discussion, and encourage something between Q&A and discussion points from the audience.

Okay so, when I say “Reframing Superintelligence,” what I mean is thinking about the context of emerging AI technologies as a process rolling forward from what we see today, and asking, “What does that say about likely paths forward?” Whatever it is that you’re imagining needs to emerge from that context or make sense in that context. Which I think reframes a lot of the classic questions. Most of the questions don’t go away, but the context in which they arise, and the tools available for addressing problems, look different. That’s what we’ll be getting into.

Once upon a time, when we thought about advanced AI, we didn’t really know what AI systems were likely to look like. It was very unknown. People thought in terms of developments in logic and other kinds of machine learning, different from the deep learning that we now see moving forward with astounding speed. And people reached for an abstract model of intelligent systems. And what intelligent systems do we know? Well, actors in the world like ourselves. We abstract from that very heavily and you end up with rational, utility-directed agents.

Today, however, we have another source of information beyond that abstract reasoning, which applies to a certain class of systems. The information that we have comes from the world around us. We can look at what’s actually happening now, and how AI systems are developing. And so we can ask questions like, “Where do AI systems come from?” Well, today they come from research and development processes. We can ask, “What do AI systems do today?” Well, broadly speaking, they perform tasks, which I think of, or will describe, as “performing services.” They do something, or some approximation of something, that someone wants, in bounded time with bounded resources. What will they be able to do? Well, if we take AI seriously, AI systems will be able to automate asymptotically all human tasks, and more, at a piecemeal and asymptotically general superintelligent level. So we said AI systems come from research and development. Well, what is research and development? It’s a bunch of tasks to automate. And, in particular, they’re relatively narrow technical tasks which are, I think, uncontroversially automatable on the path to advanced AI.

So the picture is of AI development moving forward broadly along the lines that we’re seeing. Higher-level capabilities. More and more automation of the AI R&D process itself, which is an ongoing process that’s moving quite rapidly: AI-enabled automation, and also classical software techniques for automating AI research and development. And that, of course, leads to acceleration. Where does that lead? It leads to something like recursive improvement, but not the classic recursive improvement of an agent that is striving to be a more intelligent, more capable agent. Instead, it’s recursive improvement where an AI technology base is being advanced at AI speed. And that’s a development that can happen incrementally. We see it happening now as we take steps toward advanced AI that is applicable to increasingly general and fast learning. Those are techniques that will inevitably be folded into the ongoing AI R&D process. Developers, given some advance in algorithms and learning techniques, and a conceptualization of how to address more and more general tasks, will pounce on those and incorporate them into a broader and broader range of AI services.

So where that leads is to asymptotically comprehensive AI services, which, crucially, include the service of developing new services. So: increasingly capable, increasingly broad, piecemeal and comprehensively superintelligent systems that can work with people, and interact with people in many different ways, to provide the service of developing new services. And that’s a kind of generality. That is a general kind of artificial intelligence. So a key point here is that the C in CAIS, Comprehensive AI Services, does the work of the G in AGI. Why a different term? To avoid an implication: when people say AGI, they mean an AGI agent. And we can discuss the role of agents in the context of this picture. But I think it’s clear that a technology base is not inherently, in itself, an agent. In this picture agents are not central; they are products. They are useful products of diverse kinds for providing diverse services. And so with that, I would like to (as I said, the formal part here will be short) point to a set of topics.

They kind of break into two categories. One is about short paths to superintelligence, and I’ll argue that this is the short path. There’s the topic of AI services and agents, including agent services, versus the concept of “The AI,” which looms very large in people’s concepts of future AI; I think we should look at that a little bit more closely. Superintelligence as something distinct from agents: superintelligent non-agents. And the distinction between general learning and universal competence. People have, I think, misconstrued what intelligence means, and I’ll take a moment on that. If you look at definitions from I. J. Good in the 1960s (ultraintelligence) and more recent ones from Bostrom and so on (I work across the hall from Nick) on superintelligence, the definition is something like “a system able to outperform any person in any task whatsoever.” Well, that implies general competence, at least as ordinarily read. But there’s some ambiguity over what we mean by the word “intelligence” more generally. We call children intelligent and we call senior experts intelligent. We call a child intelligent because the child can learn, not because the child can perform at a high level in any particular area. And we call an expert who can perform at a high level intelligent not because the expert can learn (in principle you could turn off learning capacity in the brain) but because the expert can solve difficult problems at a high level.

So learning and competence are dissociable components of intelligence. They are in fact quite distinct in machine learning: there is a learning process, and then there is an application of the software. And when you see a discussion of intelligent systems that does not distinguish between learning and practice, and treats action as entailing learning directly, there’s a confusion there. There’s a confusion about what intelligence means, and that’s, I think, very fundamental. In any event, looking toward safety-related concerns, there are things to be said about predictive models of human concerns; AI-enabled solutions to AI-control problems; how this reframes questions of technical AI safety; issues of addictive services and adversarial services (services include services you don’t want); taking superintelligent services seriously; and the question of whether faster development is better.
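
To make the learning/competence distinction concrete, here is a minimal sketch in Python with toy data (every name and number below is invented for illustration): a learning phase fits parameters, then learning is switched off and the frozen system still performs its task competently.

```python
def train(data, steps=2000, lr=0.01):
    """Learning phase: fit y ~ w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def make_predictor(w, b):
    """Competence phase: a frozen function that no longer learns."""
    return lambda x: w * x + b

data = [(1, 3.0), (2, 5.1), (3, 6.9), (4, 9.2)]  # roughly y = 2x + 1
w, b = train(data)                               # learning happens here...
predict = make_predictor(w, b)                   # ...and is now switched off
print(round(predict(10), 1))                     # still competent: about 21
```

The same separation holds, in far more elaborate form, in modern machine learning: the trained artifact can be deployed and used without any further learning taking place.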

And, with that, I would like to open for questions, discussion, and comments. I would like to have people come away with some shared sense of what the questions and comments are, some common knowledge of thinking in this community in the context of thinking about questions this way.

Discussion

Question: Is your model compatible with end-to-end reinforcement learning?

Eric: Yes.

To say a little bit more. By the way, I’ve been working on a collection of documents for the last two years. It’s now very large, and it will be an FHI technical report soon. It’s 30,000 words, structured to be very skimmable: top-down, hierarchical, declarative sentences expanding into longer ones, expanding into summaries, expanding into fine-grained topical discussion. So you can sort of look at the top level and say, hopefully, “Yes, yes, yes, yes, yes. What about this?” and not have to read anything like 30,000 words. So, what I would say is that reinforcement learning is a technique for AI system development. You have a reinforcement learning system; through a reinforcement learning process, which is a way of manipulating the learning of behaviors, it produces systems that are shaped by that mechanism. So it’s a development mechanism for producing systems that provide some service. Now, what if you turned reinforcement learning loose in the world, open-ended, with read-write access to the internet, as a money-maximizer, and did not have checks in place against that? There are some nasty scenarios. So basically it’s a development technique, but it could also be turned loose to produce some real problems. “Creative systems trying to manipulate the world in bad ways” scenarios are another sector of reinforcement learning. So reinforcement learning is not a problem per se, but one can create problems using that technique.
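
As a deliberately tiny illustration of reinforcement learning as a development technique, here is a sketch in which a Q-learning loop shapes a policy during development and the frozen result is then offered as a bounded service. The five-state corridor environment and all names are invented for the example; it stands in for far richer training setups.

```python
import random

N_STATES, GOAL = 5, 4                       # a 5-state corridor; reward at state 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0 = left, 1 = right

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

# Development phase: the reinforcement learning process shapes the behavior.
for _ in range(500):
    s = 0
    for _ in range(30):
        a = random.randrange(2)             # explore randomly during development
        s2, r, done = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Deployment phase: the frozen policy, offered as a service with a bounded task.
def navigate_service(start, max_steps=10):
    s, path = start, [start]
    for _ in range(max_steps):
        s, _, done = step(s, max((0, 1), key=lambda a: Q[s][a]))
        path.append(s)
        if done:
            break
    return path

print(navigate_service(0))                  # typically [0, 1, 2, 3, 4]
```

The division of labor is the point: the learning loop belongs to development, and what gets deployed is a system performing a bounded task with bounded resources.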

Question: What does asymptotic improvement of AI services mean?

Eric: I think I’m abusing the term asymptotic. What I mean is increasing scope, and increasing level of capability in any particular task, toward some arbitrary limit. “Comprehensive” is sort of like saying “infinite,” but the idea is moving toward comprehensive, superintelligent-level services. What it’s intended to convey is an ongoing process going in that direction. If someone has a better word than “asymptotic” to describe that, I’d be very happy.

Question: Can the tech giants like Facebook and Google be trusted to get alignment right?

Eric: Google more than Facebook. We have that differential. I think that questions of alignment look different here. I think more in terms of questions of application: what are the people who wield AI capabilities trying to accomplish? As background to the framing of that question (and with a lot of these questions I’ll be stepping back and asking about framing, as you might expect from the title of the talk): picture a rising set of AI capabilities. Image recognition, language understanding, planning, tactical management in battle, strategic planning for patterns of action in the world to accomplish some goals in the world. Rising levels of capability in those tasks. Those capabilities could be exploited by human decision makers or could, in principle, be exploited by a very high-level AI system. I think we should be focusing more, not exclusively, but more, on human decision makers using those capabilities than on high-level AI systems. In part because human decision makers, I think, are going to have broad strategic understanding more rapidly. They’ll know how to get away with things without falling afoul of, say, intelligence agencies watching and seeing what you’re doing, in situations nobody has seen before. It’s very hard for a reinforcement learner to learn that kind of thing.

So I tend to worry not so much about the organizations making aligned AI as about whether the organizations themselves are aligned with general goals.

Question: Could you describe the path to superintelligent services with current technology, using more concrete examples?

Eric: Well, we have a lot of piecemeal examples of superintelligence. AlphaZero is superintelligent in the narrow domain of Go. There are systems that outperform human beings at playing very different kinds of games, like Atari games. There’s face recognition, and systems have recently surpassed human ability at mapping from speech to transcribed words. More and more areas, piecemeal. A key area that I find impressive and important is the design of the neural networks at the core of modern deep learning systems: the design of networks, and learning to use hyperparameters appropriately. As of a couple of years ago, if you wanted a new neural network (a convolutional network for vision, or some recurrent network, though recently people have been using convolutional networks for language understanding and translation as well), that was a hand-crafted process. You had human judgment, and people were building these networks. A couple of years ago, people started getting superhuman performance in neural network design by automated, AI-flavored means, for example reinforcement learning systems. This is not AI in general, but it’s a chunk that a lot of attention went into: developing reinforcement learning systems that learn to put together the building blocks to make a network that outperforms human designers at that process. So we now have AI systems that are designing a core part of AI systems at a superhuman level. This is not revolutionizing the world, but that threshold has been crossed in that area.
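
As a toy sketch of that kind of automated design loop (simplified to random search rather than the reinforcement-learning controllers described above): a controller proposes networks assembled from building blocks, an evaluator scores them, and the best design is kept. The scoring function here is a made-up placeholder, not a real training run.

```python
import random

DEPTH_CHOICES = [2, 3, 4, 5]                 # the "building blocks" to assemble
WIDTH_CHOICES = [16, 32, 64, 128]
ACT_CHOICES   = ["relu", "tanh"]

def propose():
    """The controller: sample a candidate architecture from the building blocks."""
    return {"depth": random.choice(DEPTH_CHOICES),
            "width": random.choice(WIDTH_CHOICES),
            "activation": random.choice(ACT_CHOICES)}

def evaluate(arch):
    """Stand-in for 'train the candidate and measure validation accuracy'."""
    score = 0.5 + 0.05 * arch["depth"] + 0.001 * arch["width"]
    if arch["activation"] == "relu":
        score += 0.02
    return score + random.gauss(0, 0.01)     # noisy, like a real training run

best, best_score = None, float("-inf")
for _ in range(50):                          # the automated search loop
    candidate = propose()
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best, round(best_score, 3))
```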

And, similarly, there’s the automation of another labor-intensive task, one that I was told very recently by a senior person at DeepMind would require human judgment. My response was, “Do you take AI seriously or not?” And then, out of DeepMind itself, there was a paper that showed how to outperform human beings at hyperparameter selection. So those are a few examples. And the way one gets to an accelerating path is to have more and more, faster and faster implementation of human insights into AI architectures, training methods, and so on. Less and less human labor required. Higher- and higher-level human insights being turned into applications throughout the existing pool of resources. And, eventually, fewer and fewer human insights being necessary.
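
In the same spirit, here is a brief sketch of automated hyperparameter selection using a population-based exploit-and-explore loop, one known family of approaches (the talk does not specify which method the DeepMind paper used, so this is only an illustration of the general mechanism). The training step is a fictitious stand-in.

```python
import random

def train_step(score, lr):
    # Fictitious stand-in: progress is best near an arbitrary "good" learning rate of 0.1.
    return score + max(0.0, 1.0 - abs(lr - 0.1) * 5) + random.gauss(0, 0.05)

population = [{"lr": random.uniform(0.001, 1.0), "score": 0.0} for _ in range(8)]

for _ in range(20):
    for worker in population:                 # every worker trains a little
        worker["score"] = train_step(worker["score"], worker["lr"])
    population.sort(key=lambda w: w["score"], reverse=True)
    best = population[0]
    for worker in population[-2:]:            # worst workers exploit the best, then explore
        worker["lr"] = best["lr"] * random.choice([0.8, 1.2])
        worker["score"] = best["score"]

print(round(population[0]["lr"], 3))          # drifts toward the useful region
```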

Question: So what are the consequences of this reframing of superintelligence for technical AI safety research?

Eric: Well, re-contexting. If in fact one can have superintelligent systems that are not inherently dangerous, then one can ask how one can leverage high-level AI. A lot of the classic scenarios of misaligned, powerful AI involve AI systems that are taking actions that are blatantly undesirable. And, as Shane Legg said when I was presenting this at DeepMind last fall, “There’s an assumption that we have superintelligence without common sense.” And that’s a little strange. Stuart Russell has pointed out that machines can learn not only from experience, but from reading. And, one can add, from watching video, and from interacting with people through questions and answers in parallel over the internet. And we see in AI that a major class of systems is predictive models: given some input, you predict what the next thing will be. In this case, given a description of a situation or an action, you try to predict what people will think of it. Is it something that they care about or not? And, if they do care about it, is there widespread consensus that it would be a bad result? Widespread consensus that it would be a good result? Or strongly mixed opinion?

Note that this is a predictive model trained on many examples; it’s not an agent. It is an oracle that, in principle, could operate with reasoning behind the prediction, could in principle operate at a superintelligent level, and would have common sense about what people care about. Now think about having AI systems that you intend to be aligned with human concerns, where this oracle is available to a system that’s planning action. It can ask, “Well, if such and such happened, what would people think of it?” and get a very high-quality response. That’s a resource that I think one should take account of in technical AI safety. We’re very unlikely to get high-level AI without having this kind of resource. People are very interested in predicting human desires and concerns, if only because they want to sell you products or brainwash you in politics or something. And that’s the same underlying AI technology base. So I would expect that we will have predictive models of human concerns. That’s an example of a resource that would reframe some important aspects of technical AI safety.
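
Here is a minimal sketch of that pattern: a predictive model of human concerns used as an oracle that a planning system consults before offering plans. The keyword-matching "oracle" below is a trivial stand-in for what would in practice be a model trained on many examples, and every name in it is hypothetical.

```python
# Toy oracle: predicts how people would judge a described outcome.
def concern_oracle(action_description: str) -> str:
    """Return 'bad', 'good', or 'mixed' as a stand-in for a trained predictor."""
    text = action_description.lower()
    if any(word in text for word in ("harm", "destroy", "deceive")):
        return "bad"
    if any(word in text for word in ("cure", "clean", "repair")):
        return "good"
    return "mixed"

# A planner that screens candidate actions against the oracle before offering them.
def propose_plans(candidate_actions):
    return [action for action in candidate_actions
            if concern_oracle(action) != "bad"]   # drop widely objectionable plans

candidates = [
    "repair the water treatment plant",
    "deceive regulators about emissions",
    "reroute traffic through the old bridge",
]
print(propose_plans(candidates))
```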

Question: So, making AI services more general and powerful involves giving them higher-level goals. At what point of complexity and generality do these services then become agents?

Eric: Well, many services are agent-services. A chronic question arises: people will be at FHI or DeepMind and someone will say, “Well, what is an agent, anyway?” And everybody will say, “Well, there is no sharp definition. But over here we’re talking about agents, and over here we’re clearly not talking about agents.” So I would be inclined to say that if a system is best thought of as directed toward goals, and it’s doing some kind of planning and interacting with the world, I’m inclined to call it an agent. And, by that definition, there are many, many services we want, starting with autonomous vehicles, autonomous cars and such, that are agents. They have to make decisions and plan. So there’s a spectrum from there up to higher- and higher-level abilities to do means-ends analysis and planning, and to implement actions. So let’s imagine that your goal is to have a system that is useful in military action, and you would like to have the ability to execute tactics with AI speed and flexibility and intelligence, and to have strategic plans for using those tactics that are at a superintelligent level.

Well, those are all services. They’re doing something in bounded time with bounded resources. And I would argue that that set of systems would include many systems that we would call agents, but they would be pursuing bounded tasks with bounded goals. The higher levels of planning, though, would naturally be structured as systems that give options to the top-level decision makers. These decision makers would not want to give up their power; they don’t want a system guessing what they want. At a strategic level they have a chance to select, since strategy unfolds relatively slowly. So there would be opportunities to say, “Well, don’t guess. Here’s the trade-off I’m willing to make between having this kind of impact on opposition forces, this kind of lethality to civilians, and this kind of impact on international opinion. I would like options that show me different trade-offs, all very high quality, but within that trade-off space.” And here I’m deliberately choosing an example which is about AI resources being used for projecting power in the world. I think that’s a challenging case, so it’s a good place to go.
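
A small sketch of “present options across a trade-off space” rather than guessing the decision maker’s objective: candidate plans (made-up placeholders here) are filtered down to the non-dominated set over two criteria, and that whole set, not a single auto-chosen plan, goes to the human.

```python
# Hypothetical candidate plans scored on two criteria: effect achieved (higher is
# better) and harm caused (lower is better). The numbers are illustrative only.
plans = [
    {"name": "A", "effect": 0.9, "harm": 0.7},
    {"name": "B", "effect": 0.8, "harm": 0.3},
    {"name": "C", "effect": 0.6, "harm": 0.2},
    {"name": "D", "effect": 0.5, "harm": 0.4},   # dominated by B, so never offered
]

def dominates(p, q):
    """p is at least as good on both criteria and strictly better on at least one."""
    return (p["effect"] >= q["effect"] and p["harm"] <= q["harm"]
            and (p["effect"] > q["effect"] or p["harm"] < q["harm"]))

def pareto_options(candidates):
    """The system proposes the trade-off frontier; a human chooses among them."""
    return [p for p in candidates
            if not any(dominates(q, p) for q in candidates)]

for option in pareto_options(plans):
    print(option["name"], option["effect"], option["harm"])
```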

I’d like to say just a little bit, briefly, about the opposite end: superintelligent non-agents. Here’s what I think is a good paradigmatic example of superintelligence and non-agency. Right now we have systems that do natural language translation. You put in sentences (or, if you had a somewhat smarter system that dealt with more context, books), and out comes text in a different language. Well, I would like to have systems that know a lot in order to do that. You do better translations if you understand more about history, about chemistry if it’s a chemistry book, about human motivations. You’d like to have a system that knows everything about the world and everything about human beings, to give better-quality translations. But what is the system? Well, it’s a product of R&D, and it is a mathematical function of type character string to character string. You put in a character string, things happen, and out comes a translation. You do this again and again and again. Is that an agent? I think not. Is it operating at a superintelligent level with general knowledge of the world? Yes. So I think that one’s conceptual model of what high-level AI is about should have room in it for that system, and for many systems that are analogous.
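
To illustrate the shape of that non-agent, a deliberately tiny sketch: the service is just a function of type string to string. However much knowledge might sit behind a real version, each call maps input text to output text and keeps no goals or state between calls. The lookup table is a toy stand-in for an actual translation model.

```python
TOY_LEXICON = {"hello": "bonjour", "world": "monde"}   # stand-in for a real model

def translate(text: str) -> str:
    """A pure function: same input, same output, no side effects, no agenda."""
    return " ".join(TOY_LEXICON.get(word, word) for word in text.lower().split())

print(translate("Hello world"))   # -> "bonjour monde"
print(translate("Hello world"))   # identical: nothing was learned or wanted in between
```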

Question: Would a service that combines general learning with universal competence not be more useful or competitive than a system that displays either alone? So does this not suggest that agents might be more useful?

Eric: Well, as I said, agents are great. The question is what kind, and for what scope. So, as I was saying, distinguishing between general learning and universal competence is an important distinction. I think it is very plausible that we will have general learning algorithms. And general learning algorithms may be algorithms that are very good at selecting algorithms that are good at learning a particular task, and at inventing new algorithms. Now, given an algorithm for learning, there’s a question of what you’re training it to do. On what information? What competencies are being developed? And I think that the concept of a system being trained on, and learning about, everything in the world with some objective function: I don’t think that’s a coherent idea. Let’s say you have a reinforcement learner. You’re reinforcing the system to do what? Here’s the world, and it’s supposed to be getting competence in organic chemistry and ancient Greek and, I don’t know, control of the motion of tennis-playing robots, and on and on and on. What’s the reward function, and why do we think of that as one task?

I don’t think we think of it as one task. I think we think of it as a bunch of tasks, which we can construe as services. Including the service of interacting with you, learning what you want, the nuances: what you are assumed to want and assumed not to want as a person, more about your life and experience. And being very good at interpreting your gestures. And it can go out in the world and, subject to constraints of law, and consulting an oracle on what other people are likely to object to, implement plans that serve your purposes. And if the actions are important and have a lot of impact, within the law presumably, what you want is for that system to give you options before it goes out and takes action. And some of those actions would involve what are clearly agents. So that’s the picture I would like to paint, and I think it reframes the context of that question.

Question: So, on that, is it fair to say that the value-alignment problem still exists within your framework? Since, in order to train a model to build an agent that is aligned with our values, we must still specify our values.

Eric: Well, what do you mean by “train an agent to be aligned with our values”? See, the classic picture says you have “The AI,” and “The AI” gets to decide what the future of the universe looks like, and it had better understand what we want, or would want, or should want, or something like that. And then we’re off into deep philosophy. And my card says philosophy on it, so I guess I’m officially a philosopher or something, according to Oxford. I was a little surprised. “It says philosophy on it. Cool!” I do what I think of as philosophy. So, in a services model, the question would instead be, “What do you want to do?” Give me some task that is completed in bounded time with bounded resources, and we can consider how to avoid making plans that stupidly cause damage that I don’t want. Plans that, by default, automatically do what I could be assumed to want. And that pursue goals in some creative way that is bounded, in the sense that it’s not about reshaping the world; other forces would presumably try to stop you. And I’m not quite sure what value alignment means in that context. I think it’s something much more narrow and particular.

By the way, if you think of an AI system that takes over the world, keep in mind that a sub-task of that, part of that task, is to overthrow the government of China. And, presumably, to succeed the first time, because otherwise they’re going to come after you if you made a credible attempt. And that’s in the presence of unknown surveillance capabilities and unknown AI that China has. So if you have a system that might formulate plans to try to take over the world, well, I think an intelligent system wouldn’t recommend that, because it’s a bad idea. Very risky. Very unlikely to succeed. Not an objective that an intelligent system would suggest or attempt to pursue. So you’re in a very small part of scenario space where that attempt is made by a high-level AI system, and an even smaller part of scenario space where there is substantial success. I think it’s worth thinking about this. I think it’s worth worrying about it. But it’s not the dominant concern. It’s a concern in a framework where I think we’re facing an explosive growth of capabilities that can amplify many different purposes, including the purposes of bad actors. And we’re seeing that already, and that’s what scares me.

Question: So I guess, in that vein, could superintelligent services be used to take over the world by a state actor? Just the services?

Eric: Well, you know, services include tactical execution of plans and strategic planning. So could there be a way for a state actor to do that using AI systems, in the context of other actors with, presumably, a comparable level of technology? Maybe so. It’s obviously a very risky thing to do. One aspect of powerful AI is an enormous expansion of productive capacity: partly through, for example, high-level, high-quality automation, and more realistically through physics-limited production technology, which is outside today’s sphere of discourse, or Overton window.

Security systems, I will assert, could someday be both benign and effective, and therefore stabilizing. So the argument is that, eventually, it will be visibly the case that we’ll have superintelligent-level, very broad AI, enormous productive capacity, and the ability to have strategic stability, if we take the right measures beforehand to develop appropriate systems (or to be prepared to do that) and to have aligned goals among many actors. So if we distribute the much higher productive capacity well, we can have an approximately strongly Pareto-preferred world: a world that looks pretty damn good to pretty much everyone.

Note: for a more thorough presentation on this topic, see Eric Drexler’s other talk from this same conference.

Question: What do you think the greatest AI threat to society in the next 10 or 20 years would be?

Eric: I think the greatest threat is instability. There’s organic instability from AI technologies being diffused, with more and more of the economic relationships and other information-flow relationships among people being transformed in directions that increase entropy, generate conflict, and destabilize political institutions. Who knows? If you had the internet and people were putting out propaganda that was AI-enabled, it’s conceivable that you could move elections in crazy directions, in the interest of either good actors or bad actors. Well, which will it be? I think we will see efforts made to do that. What kinds of counter-pressures could be applied to bad actors using linguistically and politically competent AI systems to do messaging? And, of course, there’s the perennial case of states engaging in an arms race, which could tip into some unstable situation and lead to a war, including the long-postponed nuclear war that people are waiting for and that might, in fact, turn up some day. So I primarily worry about instability. Some of the modes of instability arise because some actor decides to do something like turn loose a competent hacking, reinforcement-learning system that goes out there and does horrible things to global computational infrastructure, things that either do or don’t serve the intentions of the parties that released it. Take a world that’s increasingly dependent on computational infrastructure and just slice through that in some horribly destabilizing way. Those are some of the scenarios I worry about most.

Question: And then maybe longer term than 10 or 20 years? If the world isn’t over by then?

Eric: Well, I think all of our thinking should be conditioned on that. If one is thinking about the longer term, one should assume that we are going to have superintelligent-level general AI capabilities. Let’s define that as the longer term in this context. And if we’re concerned with what to do with them, that means that we’ve gotten through the process to that point. So there are two questions. One is, “What do we need to do to survive, or to have an outcome that’s a workable context for solving more problems?” And the other is what to do then. So, if we’re concerned with what to do, we need to assume solutions to the preceding problems. And that means high-level superintelligent services. That probably means mechanisms for stabilizing competition. There’s a domain there that involves turning surveillance into something that’s actually attractive and benign. And the problems downstream, therefore, one hopes to have largely solved, at least the classic large problems. And now the problems that arise are problems of, “What is the world about, anyway?” We’re human beings in a world of superintelligent systems. Is transhumanism in this direction? Uploading in this direction? Developing moral patients, superintelligent-level entities that really aren’t just services and are instead the moral equivalent of people? What do you do with the cosmos? It’s an enormously complex problem. And, from the point of view of having good outcomes, what can I say? There are problems.

Question: So what can we do to improve diversity in the AI sector? And what are the likely risks of not doing so?

Eric: Well, I don’t know. My sense is that what is most important is having the interests of a wide range of groups be well represented. To some extent, obviously, that’s helped by having people with these diverse concerns in the development process, in the corporations. To some extent it’s a matter of politics, regulation, cultural norms, and so on. I think that’s a direction we need to push in. To put this in the Paretotopian framework, your aim is to have objectives, goals, that really are aligned: possible futures that are strongly goal-aligning for many different groups. For many of those groups, we won’t fully understand them from a distance. So we need to have some joint process that produces an integrated, adjusted picture of, for example, how we can have EAs be happy and have billionaires maintain their relative position. Because if you don’t do that, they’re going to maybe oppose what you’re doing, and the point is to avoid serious opposition. And also have the government of China be happy. And I would like to see the poor in rural Africa be much better off, too. Billionaires might be way up here, competing not to build orbital vehicles but starships, while the poor in rural Africa of today merely have orbital space capabilities convenient for families, because they’re poor. Nearly everyone much, much better off.
