It’s great to see people thinking about these topics and I agree with many of the sentiments in this post. Now I’m going to write a long comment focusing on those aspects I disagree with. (I think I probably agree with more of this sentiment than most of the people working on alignment, and so I may be unusually happy to shrug off these criticisms.)
Contrasting “multi-agent outcomes” and “superintelligence” seems extremely strange. I think the default expectation is a world full of many superintelligent systems. I’m going to read your use of “superintelligence” as “the emergence of a singleton concurrently with the development of superintelligence.”
I don’t consider the “single superintelligence” scenario likely, but I don’t think that has much effect on the importance of AI alignment research or on the validity of the standard arguments. I do think that the world will gradually move towards being increasingly well-coordinated (and so talking about the world as a single entity will become increasingly reasonable), but I think that we will probably build superintelligent systems long before that process runs its course.
The future looks broadly good in this scenario given approximately utilitarian values and the assumption that ems are conscious, with a large growing population of minds which are optimized for satisfaction and productivity, free of disease and sickness.
On total utilitarian values, the actual experiences of brain emulations (including whether they have any experiences) don’t seem very important. What matters are the preferences according to which emulations shape future generations (which will be many orders of magnitude larger).
“freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about”
Evolution doesn’t really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.
(Evolution might select for particular values, e.g. if it’s impossible to reliably delegate or if it’s very expensive to build systems with stable values. But (a) I’d bet against this, and (b) understanding this phenomenon is precisely the alignment problem!)
(I discuss several of these issues here, Carl discusses evolution here.)
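To make the "evolution favors patience" point concrete, here is a toy sketch (illustrative only; the growth rate, reinvestment rates, and value labels are made-up parameters): agents that reinvest more of their resources end up with almost the entire resource share, regardless of what their terminal values are about.

```python
# Toy sketch, not from the original comment: agents differ in what they
# terminally value (the "values" label) and in how much of each period's
# output they reinvest rather than spend on those values ("patience").
# Long-run resource share depends only on the reinvestment rate, not on
# what the values happen to be about.

GROWTH = 1.05   # hypothetical per-period return on reinvested resources
PERIODS = 200

agents = [
    {"values": "art",       "reinvest": 0.50, "resources": 1.0},
    {"values": "leisure",   "reinvest": 0.90, "resources": 1.0},
    {"values": "monuments", "reinvest": 0.90, "resources": 1.0},
]

for _ in range(PERIODS):
    for a in agents:
        # Whatever is spent on terminal values is consumed; the rest compounds.
        a["resources"] *= 1 + (GROWTH - 1) * a["reinvest"]

total = sum(a["resources"] for a in agents)
for a in agents:
    print(f'{a["values"]:<10} reinvest={a["reinvest"]:.2f} '
          f'share={a["resources"] / total:.3f}')

# The two patient agents end with equal, dominant shares despite caring about
# different things; the impatient agent's share shrinks toward zero.
```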
Whatever the type of agent, arms races in future technologies would lead to opportunity costs in military expenditures and would interfere with the project of improving welfare. It seems likely that agents designed for security purposes would have preferences and characteristics which fail to optimize for the welfare of themselves and their neighbors. It’s also possible that an arms race would destabilize international systems and act as a catalyst for warfare.
It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it. If there weren’t competitive pressure / selection pressure to adopt future AI systems, then alignment would be much less urgent since we could just take our time.
There may be other interventions that improve coordination/peace more broadly, or which improve coordination/peace in particular possible worlds etc., and those should be considered on their merits. It seems totally plausible that some of those projects will be more effective than work on alignment. I’m especially sympathetic to your first suggestion of addressing key questions about what will/could/should happen.
Not only is this a problem on its own, but I see no reason to think that the conditions described above wouldn’t apply for scenarios where AI agents turned out to be the primary actors and decisionmakers rather than transhumans or posthumans.
Over time it seems likely that we will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each other's situations, and to understand what we would believe if we viewed others' private information.
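One way to make the "consensus about the likely consequences of conflict" point concrete is the standard bargaining-range argument. The sketch below is illustrative only, with made-up numbers: if both sides agree on the probability of victory and fighting is costly, there is always a peaceful split that both prefer to war.

```python
# Minimal sketch with made-up numbers: two parties dispute a prize worth 1.
# If both agree that A wins a war with probability p, and fighting costs each
# side something, then every peaceful split giving A a share in
# [p - cost_a, p + cost_b] beats war for both sides, and that interval is
# nonempty whenever the costs are positive.

def bargaining_range(p, cost_a, cost_b):
    """Interval of shares for A that both sides prefer to fighting."""
    low = p - cost_a    # A's expected payoff from war
    high = p + cost_b   # 1 minus B's expected payoff from war
    return max(low, 0.0), min(high, 1.0)

low, high = bargaining_range(p=0.6, cost_a=0.1, cost_b=0.1)
print(f"Any split giving A between {low:.2f} and {high:.2f} avoids war.")

# Better consensus about p (the likely consequences of conflict) narrows
# disagreement about where in this range a deal should land.
```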
More generally, we would like to avoid destructive conflict and are continuously developing new tools for getting what we want / becoming smarter and better-informed / etc.
And on top of all that, the historical trend seems to basically point to lower and lower levels of violent conflict, though this is in a race with greater and greater technological capacity to destroy stuff.
I would be more than happy to bet that the intensity of conflict declines over the long run. I think the question is just how much we should prioritize pushing it down in the short run.
“the only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.”
I disagree with this. See my earlier claim that evolution only favors patience.
I do agree that some kinds of coordination problems need to be solved, for example, we must avoid blowing up the world. These are similar in kind to the coordination problems we confront today, though they will continue to get harder and we will have to solve them better over time; we can't have a cold war each century with increasingly powerful technology.
There is still value in AI safety work… but there are other parts of the picture which need to be explored
This conclusion seems safe, but it would be safe even if you thought that early AI systems will precipitate a singleton (since one still cares a great deal about the dynamics of that transition).
Better systems of machine ethics which don’t require superintelligence to be implemented (as coherent extrapolated volition does)
By “don’t require superintelligence to be implemented,” do you mean systems of machine ethics that will work even while machines are broadly human level? That will work even if we need to solve alignment long before the emergence of a singleton? I’d endorse both of those desiderata.
I think the main difference in alignment work for unipolar vs. multipolar scenarios is how high we set the bar for “aligned AI,” and in particular how closely competitive it must be with unaligned AI. I probably agree with your implicit claim: either they must be closely competitive, or we need new institutional arrangements to avoid trouble.
Rather than having a singleminded focus on averting a particular failure mode
I think the mandate of AI alignment easily covers the failure modes you have in mind here. I think most of the disagreement is about what kinds of considerations will shape the values of future civilizations.
both working on arguments that agents will be linked via a teleological thread where they accurately represent the value functions of their ancestors
At this level of abstraction I don’t see how this differs from alignment. I suspect the details differ a lot, in that the alignment community is very focused on the engineering problem of actually building systems that faithfully pursue particular values (and in general I’ve found that terms like “teleological thread” tend to be linked with persistently low levels of precision).
Thanks for the comments.
Evolution doesn’t really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.
Evolution favors replication. But patience and resource acquisition aren’t obviously correlated with any sort of value; if anything, better resource-acquirers are destructive and competitive. The claim isn’t that evolution is intrinsically “against” any particular value, it’s that it’s extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic. Furthermore, competitive dynamics lead to systematic failures. See the citation.
Shulman’s post assumes that once somewhere is settled, it’s permanently inhabited by the same tribe. But I don’t buy that. Agents can still spread through violence or through mimicry (remember the quote on fifth-generation warfare).
It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it.
All I am saying is that the argument applies to this issue as well.
Over time it seems likely that we will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each other's situations, and to understand what we would believe if we viewed others' private information.
The point you are quoting is not about just any conflict, but the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict. Better technology yields better monitoring, but also better hiding: which is easier, monitoring ICBMs in the 1970s or monitoring cyberweapons today?
One of the most critical pieces of information in these cases is intentions, which are easy to keep secret and will probably remain so for a long time.
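To illustrate why complete information about consequences does not dissolve an arms race, here is a minimal sketch of the usual prisoner's-dilemma framing (the payoff numbers are hypothetical): arming is each side's best response no matter what the other does, even though mutual restraint is better for both.

```python
# Illustrative sketch with hypothetical payoffs: a one-shot arms race as a
# prisoner's dilemma. Even with complete information about the consequences,
# "arm" is each side's best response to whatever the other does, so both arm,
# leaving both worse off than under mutual restraint.

ACTIONS = ("restrain", "arm")

# PAYOFF[my_action][their_action] = my payoff
PAYOFF = {
    "restrain": {"restrain": 3, "arm": 0},
    "arm":      {"restrain": 4, "arm": 1},
}

def best_response(their_action):
    return max(ACTIONS, key=lambda mine: PAYOFF[mine][their_action])

for theirs in ACTIONS:
    print(f"If the other side plays {theirs!r}, the best response is "
          f"{best_response(theirs)!r}")

# Arming is dominant either way, so (arm, arm) is the equilibrium even though
# (restrain, restrain) gives both sides a higher payoff.
```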
By “don’t require superintelligence to be implemented,” do you mean systems of machine ethics that will work even while machines are broadly human level?
Yes, or even implementable in current systems.
I think the mandate of AI alignment easily covers the failure modes you have in mind here.
The failure modes here arise in a different context, one where the existing research is often less relevant or not relevant at all. Whatever you put under the umbrella of alignment, there is a difference between looking at a particular system with the assumption that it will rebuild the universe in accordance with its value function, and looking at how systems interact in varying numbers. If you drop the assumption that the agent will be all-powerful and far beyond human intelligence then a lot of AI safety work isn’t very applicable anymore, while it increasingly needs to pay attention to multi-agent dynamics. Figuring out how to optimize large systems of agents is absolutely not a simple matter of figuring out how to build one good agent and then replicating it as much as possible.
If you drop the assumption that the agent will be all-powerful and far beyond human intelligence then a lot of AI safety work isn’t very applicable anymore, while it increasingly needs to pay attention to multi-agent dynamics
I don’t think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with “well that research was silly anyway.”)
Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which “multi-agent dynamics” do you think change the technical situation?
the claim isn’t that evolution is intrinsically “against” any particular value, it’s that it’s extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic
If evolution isn’t optimizing for anything, then you are left with the agents’ optimization, which is precisely what we wanted. I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where “anyone who wants to destroy the world has the option,” as is the security dilemma, and so forth.)
Yes, or even implementable in current systems.
We are probably on the same page here. We should figure out how to build AI systems so that they do what we want, and we should start implementing those ideas ASAP (and they should be the kind of ideas for which that makes sense). When trying to figure out whether a system will “do what we want” we should imagine it operating in a world filled with massive numbers of interacting AI systems all built by people with different interests (much like the world is today, but more).
The point you are quoting is not about just any conflict, but the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict.
You’re right.
Unsurprisingly, I have a similar view about the security dilemma (e.g. think about automated arms inspections and treaty enforcement; I don’t think the effects of technological progress are at all symmetrical in general). But if someone has a proposed intervention to improve international relations, I’m all for evaluating it on its merits. So maybe we are in agreement here.
I don’t think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with “well that research was silly anyway.”)
The parenthetical is probably true, e.g. for most of MIRI’s traditional agenda. If agents don’t quickly gain decisive strategic advantages then you don’t have to get AI design right the first time; you can make many agents and weed out the bad ones. So the basic design desiderata are probably important, but it’s just not very useful to do research on them now. I’m not familiar enough with your line of work to comment on it, but consider the degree to which a problem would no longer be a problem if you could build, test, and interact with many prototype human-level and smarter-than-human agents.
Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which “multi-agent dynamics” do you think change the technical situation?
Aside from the ability to prototype as described above, there are the same dynamics which plague human society: multiple factions with good intentions end up fighting due to security concerns or tragedies of the commons, or multiple agents with different priors interpret every new piece of evidence they see differently and so go down intractably separate paths of disagreement. An FAI can solve all the problems of class, politics, economics, etc. by telling everyone what to do, for better or for worse. But multi-agent systems will only be stable with strong institutions, unless they have some other kind of cooperative architecture (such as universal agreement in value functions, in which case you now have the problem of controlling everybody’s AIs, but without the benefit of having an FAI to rule the world). Building these institutions and cooperative structures may have to be done right the first time, since they are effectively singletons, and they may be less corrigible or require different kinds of mechanisms to ensure corrigibility. And the dynamics of multi-agent systems mean you cannot accurately predict the long-term future merely based on value alignment, which you would (at least naively) be able to do with a single FAI.
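To make the commons piece of this concrete, here is a toy sketch (the stock, regrowth rate, and quota numbers are made up): a shared resource survives under an agreed quota but collapses when each agent takes a little more.

```python
# Toy sketch with made-up parameters: a shared renewable resource. Under an
# agreed quota the stock is sustained; if every agent myopically takes a bit
# more, the stock collapses, even though all agents prefer the cooperative
# outcome to the collapse.

def simulate(n_agents, per_agent_take, periods=50, stock=100.0,
             regrowth=0.25, capacity=100.0):
    for _ in range(periods):
        harvest = min(stock, n_agents * per_agent_take)
        stock -= harvest
        # Resource regrows in proportion to what is left, up to carrying capacity.
        stock = min(stock * (1 + regrowth), capacity)
    return stock

print("stock under the quota:      ", round(simulate(10, per_agent_take=1.5), 1))
print("stock under over-extraction:", round(simulate(10, per_agent_take=3.0), 1))

# Strong institutions are what keep the quota at 1.5 per agent; without them,
# individual incentives push takes up and the commons is destroyed.
```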
If evolution isn’t optimizing for anything, then you are left with the agents’ optimization, which is precisely what we wanted.
Well, it leads to agents that are optimal replicators in their given environments. That’s not (necessarily) what we want.
I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where “anyone who wants to destroy the world has the option,” as is the security dilemma, and so forth.)
That too!