This project will give people an unrealistically familiar and tame picture of the future. Eutopia is Scary, and the most unrealistic view of the future is not the dystopia, nor the utopia, but the one which looks normal.[1] The contest ground rules require, if not in so many words, that all submissions look normal. Anything which obeys these ground rules is wrong. Implausible, unattainable, dangerously misleading, bad overconfident reckless arrogant wrong bad.
This is harmful, not helpful; it damages the risk messaging rather than improving it; endorsing any such view of the future is lying. At best it’s merely lying to the public; it also runs the risk of a much worse outcome, lying to yourselves.
The ground rules present a very narrow target. The geopolitical constraints state that the world can’t substantially change in its form of governance or its degree of state power. AI may not trigger any world-shaking social change. AGI must exist for 5+ years without rendering the world unrecognizable. These constraints are (intentionally, I believe) incompatible with a hard-takeoff AGI, but they also rule out any weaker form of recursive self-improvement. This essentially mandates a Hansonian view of AI progress.
I would summarize that view as
Everything is highly incremental
Progress in each AI-relevant field depends on incorporating insights from disparate fields
AI progress consists primarily of integrating skill-specialized modules
Many distinct sources develop such modules and peer projects borrow from each other
Immediately following the first AGI, many AGIs exist and are essentially peers in capability
AI progress is slow overall
This has multiple serious problems. One, it’s implausible in light of the nature of ML progress to date; the most significant achievements have nearly all come from a single source, DeepMind, and propagated outward from there. Two, it doesn’t lead to a future dominated by AGI; as Hanson himself previously extrapolated, it leads to an Age of Em, where uploads, not AGI, are the pivotal digital minds.
Which means that a proper prediction along these lines will fail the stated criteria, because
Technology is advancing rapidly and AI is transforming the world sector by sector.
will not be true: AI will not be transformative in such a world.
With all that in mind, I would encourage anyone making a submission to flout the ground rules and aim for a truly plausible world. This would necessarily break all three of
The US, the EU and China have managed a steady, if uneasy, power equilibrium.
India, Africa and South America are quickly on the rise as major players.
Despite ongoing challenges, there have been no major wars or other global catastrophes.
since those all require a geopolitical environment which is similar to the present day. It would probably also have to violate
Technology is advancing rapidly and AI is transforming the world sector by sector.
If we want a possible vision of the future, it must not look like that.
[1] I am quoting this from somewhere, probably the Sequences, but I cannot find the original source or wording.
There’s obviously lots I disagree with here, but at bottom, I simply don’t think it’s the case that economically transformative AI necessarily entails singularity or catastrophe within 5 years in every plausible world: there are lots of imaginable scenarios compatible with the ground rules set for this exercise, and I think assigning accurate probabilities amongst them, and relative to other scenarios, is very, very difficult.
“Necessarily entails singularity or catastrophe”, while definitely correct, is a substantially stronger statement than I made. To violate the stated terms of the contest, an AGI need only violate “transforming the world sector by sector”. An AGI’s transformation would be neither gradual nor limited to specific portions of the economy. It would be broad-spectrum and immediate. Some narrow sectors would be rendered unrecognizable almost at once, and virtually every sector would be transformed drastically by five years in, and almost certainly by two.
An AGI which has any ability to self-improve will not wait that long. It will be months, not years, and probably weeks, not months. A ‘soft’ takeoff would still be faster than five years. These rules mandate not a soft takeoff, but no takeoff at all.
That something is very unlikely doesn’t mean it’s unimaginable. The point of imagining and exploring such unlikely scenarios is that, with a positive vision, we can at least attempt to make it more likely. Without a positive vision, only catastrophic scenarios remain. That, I think, is the main motivation for FLI to organize this contest.
I agree, though, that the base assumptions stated in the contest make it hard to come up with a realistic image.
A positive vision which is false is a lie. No vision meeting the contest constraints is achievable, or even desirable as a post-AGI target. There might be some good fiction that comes out of this, but it will be as unrealistic as Vinge’s Zones of Thought setting. Using it in messaging would be at best dishonest, and, worse, probably self-deceptive.