I want to second what Czynski said about pure propaganda. Insofar as we believe that the constraints you are imposing are artificial and unrealistic, doesn’t this contest fall into the “pure propaganda” category? I would be enthusiastically in favor of this contest if there weren’t such unrealistic constraints. Or do you think the constraints are actually realistic after all?
I think it’s fine if we have broad leeway to interpret the constraints as we see fit. E.g.: “Technology is improving rapidly because, while the AGI already has mature technology, humans have requested that advanced technology be slowly doled out so as not to give us too much of a shock. So technology actually used by humans is improving rapidly, even though the cutting-edge stuff used by the AGI has stagnated. Meanwhile, while the US, EU, and China have no real power (real power lies with the AGI), the AGI follows the wishes of humans, and humans still want the US, EU, and China to be important somehow, so lots of decisions are delegated to those entities. Also, humans are gradually starting to realize that if you’re delegating decisions to old institutions, you might as well delegate to more institutions than just the US, EU, and China, so increasingly decisions are being delegated to African, South American, etc. governments rather than the US, EU, and China. So in that sense a ‘balance of power between the US, EU, and China has been maintained’ and ‘Africa et al. are on the rise.’” Would you accept interpretations such as this?
To clarify, I’m not affiliated with FLI, so I’m not the one imposing the constraints; they are. I’m just defending them because the contest rules seem reasonable enough to me. Here are a couple of thoughts:
Remember that my comment was drawing a distinction between “describing total Eutopia, a full and final state of human existence that might be strange beyond imagining” and “describing a 2045 AGI scenario where things are looking positive, under control, and not too crazy”. I certainly agree with you that describing a totally transformed Eutopia where the USA and China still exist in exactly their current form is bizarre and contradictory. My point about Eutopia was just that an honest description of something indescribably strange should err towards conveying the general feeling (i.e., it will be nice) rather than trying to scare people with the weirdness. (Imagine going back in time and horrifying the Founding Fathers by describing how in the present day “everyone sits in front of machines all day!! people eat packaged food from factories!!!” Shocking the Founders like this seems misleading if the overall progress of science and technology is something they would ultimately be happy about.) Do you agree with that, or do you at least see what I’m saying?
Anyway, on to the more important issue of the actual contest, its 2045 AGI story, and its oddly specific political requirements:
I agree with you that a positive AGI outcome that fits all these specific details is unlikely.
But I also think that the idea of AGI having a positive outcome at all seems unlikely—right now, if AGI happens, I’m mostly expecting paperclips!
Suppose I think AGI has a 70% chance of going paperclips, and a 30% chance of giving us any kind of positive outcome. Would it be unrealistic for me to write a story about the underdog 30% scenario in which we don’t all die horribly? No, I think that would be a perfectly fine thing to write about.
What if I were writing about a crazy-unlikely, 0.001% scenario? Then I’d be worried that my story might mislead people by making it seem more likely than it really is. That’s definitely a fair criticism—for example, I might think it was immoral for someone to write a story about “The USA has a communist revolution, but against all odds and despite the many examples of history, few people are hurt and the new government works perfectly and never gets taken over by a bloodthirsty dictator and central planning finally works better than capitalism and it ushers in a new age of peace and prosperity for mankind!”
But on the other hand, writing a very specific story is a good way to describe a goal that we are trying to hit, even if it’s unlikely. The business plan of every moonshot tech startup was once an unlikely and overly-specific sci-fi story. (“First we’re going to build the world’s first privately-made orbital rocket. Then we’re going to scale it up by 9x, and we’re going to fund it with NASA contracts for ISS cargo delivery. Once we’ve figured out reusability and dominated the world launch market, we’ll make an even bigger rocket, launch a money-printing satellite internet constellation, and use the profits to colonize Mars!”) Similarly, I would look much more kindly on a communist-revolution story if, instead of just fantasizing, it tried to plot out the most peaceful possible path to a new type of government that would really work—trying to tell the most realistic possible story under a set of unrealistic constraints that define our goal. (“…After the constitution has been fully reinterpreted by our revisionist Supreme Court justices—yes, I know that’ll be tough, but it seems to be the only way, please bear with me—we’ll use a Georgist land tax to fund public services, and citizens will contribute directly to decision-making via a cryptographically secured system of liquid democracy…”)
FLI is doing precisely this: choosing a set of unrealistic constraints that define a positive near-term path for civilization that most normal people (not just wild transhumanist LessWrongers) would be happy about. Chinese people wouldn’t be happy about a sci-fi future that incidentally involved a nuclear war in which their entire country was wiped off the map. Most people wouldn’t be happy if they heard that the world was going to be transformed beyond all recognition, with the economy doubling every two months as the world’s mountains and valleys are ripped up and converted to nanomachine supercomputers. Et cetera. FLI isn’t trying to choose something plausible—they’re just trying to choose a goal that everybody can agree on (peace, very fast but not bewilderingly fast economic growth, exciting new technologies to extend lifespan and make life better). Figuring out if there’s any plausible way to get there is our job. That’s the whole point of the contest.
You say: “[This scenario seems so unrealistic that I can only imagine it happening if we first align AGI and then request that it give us a slow ride even though it’s capable of going faster.] …Would you accept interpretations such as this?”
I’m not FLI so it’s not my job to say which interpretations are acceptable, but I’d say you’re already doing exactly the work FLI was looking for! I agree that this scenario is one of the most plausible ways that civilization might end up fulfilling the contest conditions. Here are some other possibilities:
Our AGI paradigm turns out to be really limited for some reason and it doesn’t scale well, so we get near-human AGI that does a lot to boost growth, but nothing really transformative. (It seems very unlikely to me that AGI capabilities would top out at such a convenient level, but who knows.)
Civilization is totally out of control and the alignment problem isn’t solved at all; we’re in the middle of a “slow takeoff” towards paperclips by 2050, but the contest timeline ends in 2045, so all we see is things getting nicer and nicer as cool new technologies are invented, and not the horrifying treacherous turn where it all goes wrong. (This seems quite likely to me, but it also seems to go strongly against the spirit of the question and would probably be judged harshly as lacking in the “aspirational” department.)
Who is in control of the AGI? Maybe it’s the USA/China/EU all cooperating in a spirit of brotherhood to limit the pace of progress to something palatable and non-disorienting (the scenario you described). Or maybe it’s some kind of cabal of secret geniuses controlling things behind the scenes from their headquarters at DeepMind. If you expect AGI to be developed all at once via fast takeoff (thus giving huge disproportionate power to the first inventors of aligned AGI), you might find the “cabal of secret geniuses” story more plausible than the version where governments all come together to competently manage AI for the sake of humanity.
See my response to Czynski for more assorted thoughts, although I’ve written so much at this point that I probably could have entered the contest myself if I had been writing stories instead of comments! :P
Edited to add: alas, I only just now saw your other comment about “So in order to describe a good future, people will fiddle with the knobs of those important variables so that they are on their conducive-to-good settings rather than their most probable settings.” This strikes me as a fair criticism of the contest. (For one thing, it will bias people towards handwaving away the alignment problem by saying “it turned out to be surprisingly easy”.) I don’t think that’s devastating for the contest, since I think there’s a lot of value in just trying to envision what an agreeable good outcome for humanity looks like. But it’s definitely a fair critique that lines up with the stuff I was saying above—basically, there are both pros and cons to putting $100K of optimization pressure behind getting people to figure out the most plausible optimistic outcome under a set of constraints. (Maybe FLI should run another contest encouraging people to do more Yudkowsky-style brainstorming of how everything could go horribly wrong before we even realize what we’re dealing with, just to even things out!)
Thanks for this thoughtful and detailed response. I think we are basically on the same page now. I agree with your point about Eutopia vs. 2045.
Even if you don’t speak for FLI, I (at least somewhat) do, and agree with most of what you say here — thanks for taking the time and effort to say it!
I’ll also add that — again — we envisage this contest as just step 1 in a bigger program, which will include other sets of constraints.