I think that the evidence you cite for “careening towards Venezuela” being a significant risk comes nowhere near to showing that, and that as someone with a lot of sway in the community you’re being epistemically irresponsible in suggesting otherwise.
Of the links you cite as evidence:
The first is about the rate of advance slowing, which is not a collapse or regression scenario. At most it could contribute to such a scenario if we had reason to think one was otherwise likely.
The second describes an already existing phenomenon, cost disease, which, while concerning, has been compatible with high rates of growth and progress over the past 200 years.
The third is just a blog post about how some definitions of “democratic” are theoretically totalitarian in principle, and contains no argument (not even a bad one) that totalitarianism risk is high, or rising, or will become high.
The fourth is mostly just a piece that takes for granted that some powerful American liberals and some fraction of American liberals like to shut down dissenting opinion, and then discusses inconclusively how much this will continue and what can be done about it. But this seems obviously insufficient to cause the collapse of society, given that, as you admit, periods of liberalism where you could mostly say what you like without being cancelled have been the exception rather than the rule over the past 200 years, and yet growth and progress have occurred. Not to mention that they have also occurred in places like the Soviet Union, or China from the early 1980s onward, that have been pretty intolerant of ideological dissent.
The fifth is a highly abstract and inconclusive discussion of the possibility that having a bunch of governments that grow/shrink in power as their policies are successful/unsuccessful might produce better policies than an (assumed) status quo where this doesn’t happen*, combined with a discussion of the connection of this idea to an obscure far-right Bay Area movement of at most a few thousand people. It doesn’t actually argue for the idea that dangerous popular ideas will eventually cause civilizational regression at all; it’s mostly about what would follow if popular ideas tended to be bad in some general sense, and you could get better ideas by having a “free market for governments” where only successful governments survived.
The last link, on dysgenics and fertility collapse, largely consists of you arguing that these are not as threatening as some people believe(!). In particular, you argue that world population will still be slightly growing by 2100 and that it’s just really hard to project current trends beyond then. And you argue that dysgenic trends are real but will only cause a very small reduction in average IQ, even absent a further Flynn effect (and “absent a further Flynn effect” strikes me as unlikely if we are talking about world IQ, not US). Nowhere does it argue these things will be bad enough to send progress into reverse.
This is an incredibly slender basis to be worrying about the idea that the general trend towards growth and progress of the last 200 years will reverse absent one particular transformative technology.
*It plausibly does happen to some degree. The US won the Cold War partly because it had better economic policies than the Soviet Union.
I want to add further that cost disease is not only compatible with economic growth, cost disease itself is a result of economic growth, at least in the usual sense of the word. The Baumol effect—which is what people usually mean when they say cost disease—is simply a side effect of some industries becoming more productive more quickly than others. Essentially the only way to avoid cost disease is to have uniform growth across all industries, and that’s basically never happened historically, except during times of total stagnation (in which growth is ~0% in every industry).
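To make the mechanism concrete, here is a toy two-sector sketch of my own (the 3% and 0.5% growth rates are made-up assumptions for illustration, not figures from any of the links):

```python
# Toy Baumol / cost-disease illustration with assumed numbers, not real data.
# If wages track productivity in the fast-growing sector, the slow sector's
# unit cost rises relative to the fast sector's even while total output grows.

years = 50
fast_growth = 0.03   # assumed annual productivity growth, "fast" sector
slow_growth = 0.005  # assumed annual productivity growth, "slow" sector

fast_prod = (1 + fast_growth) ** years
slow_prod = (1 + slow_growth) ** years

# Relative unit cost of the slow sector = (wage / slow_prod) / (wage / fast_prod)
# = fast_prod / slow_prod.
relative_cost = fast_prod / slow_prod
print(f"After {years} years the slow sector's output costs ~{relative_cost:.1f}x "
      "as much relative to the fast sector's, despite overall growth.")
```

On these assumed numbers the slow sector ends up roughly 3.4x more expensive in relative terms, which is the "cost disease" pattern, produced entirely by uneven growth rather than by stagnation.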
Thanks for writing this up; I was skeptical about Scott’s strong take but didn’t take the time to check the links he provided as proof.
I think this is a good and useful post in many ways, in particular laying out a partial taxonomy of differing pause proposals and gesturing at their grounding and assumptions. What follows is a mildly heated response I had a few days ago, whose heatedness I don’t necessarily endorse but whose content seems important to me.
Sadly this letter is full of thoughtless remarks about China and the US/West. Scott, you should know better. Words have power. I recently wrote an admonishment to CAIS for something similar.
There are literal misanthropic ‘effective accelerationists’ in San Francisco, some of whose stated purpose is to train/develop AI which can surpass and replace humanity. There’s Facebook/Meta, whose leaders and executives have been publicly pooh-poohing discussion of AI-related risks as pseudoscience for years, and whose actual motto is ‘move fast and break things’. There’s OpenAI, which with great trumpeting announces its ‘Superalignment’ strategy without apparently pausing to think, ‘But what if we can’t align AGI in 5 years?’ We don’t need to invoke bogeyman ‘China’ to make this sort of point. Note also that the CCP (along with EU and UK gov) has so far been more active in AI restraint and regulation than, say, the US government, or orgs like Facebook/Meta.
Now, this was in the context of paraphrases of others’ positions on a pause in AI development, so it’s at least slightly mention-flavoured (as opposed to use). But as far as I can tell, the precise framing here has been introduced in Scott’s retelling.
Whoever introduced this formulation, this is bonkers in at least two ways. First, who is ‘the West’ and who is ‘China’? This hypothetical frames us as hivemind creatures in a two-player strategy game with a single lever. Reality is a lot more porous than that, in ways which matter (strategically and in terms of outcomes). I shouldn’t have to point this out, so this is a little bewildering to read. Let me reiterate: governments are not currently pursuing advanced AI development, only companies. The companies are somewhat international, mainly headquartered in the US and UK but also to some extent China and EU, and the governments have thus far been unwitting passengers with respect to the outcomes. Of course, these things can change.
Second, actually think about the hypothetical where ‘we’[1] are ‘on the verge of creating dangerous AI’. For sufficient ‘dangerous’, the only winning option for humanity is to take the steps we can to prevent, or at least delay[2], that thing coming into being. This includes advocacy, diplomacy, ‘aggressive diplomacy’ and so on. I put forward that the right length of pause then is ‘at least as long as it takes to make the thing not dangerous’. You don’t win by capturing the dubious accolade of nominally belonging to the bloc which directly destroys everything! To be clear, I think Scott and I agree that ‘dangerous AI’ here is shorthand for, ‘AI that could defeat/destroy/disempower all humans in something comparable to an extinction event’. We already have weak AI which is dangerous to lesser levels. Of course, if ‘dangerous’ is more qualified, then we can talk about the tradeoffs of risking destroying everything vs ‘us’ winning a supposed race with ‘them’.
I’m increasingly running with the hypothesis that many anglophones are mind-killed on the inevitability of contemporary great power conflict in a way which I think wasn’t the case even, say, 5 years ago. Maybe this is how thinking people felt in the run-up to WWI, I don’t know.
I wonder if a crux here is some kind of general factor of trustingness toward companies vs toward governments—I think extremising this factor would change the way I talk and think about such matters. I notice that a lot of American libertarians seem to have a warm glow around ‘company/enterprise’ that they don’t have around ‘government/regulation’.
[ In my post about this I outline some other possible cruxes and I’d love to hear takes on these ]
Separately, I’ve got increasingly close to the frontier of AI research and AI safety research, and the challenge of ensuring these systems are safe remains very daunting. I think some policy/people-minded discussions are missing this rather crucial observation. If you expect it to be easy (and expect others to expect that) to control AGI, I can see more why people would frame things around power struggles and racing. For this reason, I consider it worthwhile repeating: we don’t know how to ensure these systems will be safe, and there are some good reasons to expect that they won’t be by default.
I repeat that the post as a whole is doing a service and I’m excited to see more contributions to the conversation around pause and differential development and so on.
Who, me? You? No! Some development team at DeepMind or OpenAI, presumably, or one of the current small gaggle of other contenders, or a yet-to-be-founded lab.
If it comes to it, extinction an hour later is better than an hour sooner.
“eventually technology will advance to the point where you can train an AI on anything”
Assuming this means AGI, this is a very strong claim that doesn’t get any justification. It may be theoretically true if “eventually” means “within 100 billion years”, but it’s not obvious to me that this will be true on more practical time scales (10-300 years).
“Fourth, there are many arguments that a pause would be impossible, but they mostly don’t argue against trying.”
I think this is a really important point.
I think that we can get that much wider adoption of x-risk arguments (indeed we are already seeing it), and a taboo on AGI / superhuman AI to go along with it, which will go a long way toward making enforcement of frontier model training run caps manageable.
Thanks for sharing, Scott! For reference, your post had already been linkposted, but it may be fine to have the whole post here as well. I think it makes sense to contact the author before linkposting.
(I suggested to Scott that he do this crosspost. I think it was nice of David to do the link post, but I like having the full text available on the forum, and under the original author’s name.)
I’d like to see more fleshed out reasoning on where this number is coming from. Is it based on an aggregate of expert views from people you trust? Or is there an actual gears-level mechanism for why there is non-doom over ~80% of future worlds with AGI? (Also, 20% is more than enough to be shouting “fucking stop[!]”...)
Also would be good to see more justification for this! As per Dr. David Mathers’ comment below. (And also: “Find some other route to the glorious transhuman future[!]”)
Good that you don’t support AI accelerationism, but I remain unconvinced by the reasoning for having carefully-tailored pauses. It seems far too risky to me.
I’m curating this post. This is a well-written summary of the AI Pause Debate, and I’m excited for our community to build on that conversation, through distillation and more back-and-forth.
I’m curious why Zach thinks that it would be ideal for leading AI labs to be in the US. I tried to consider this from the lens of regulation. I haven’t read extensively on comparisons of AI regulation across countries, but my impression is that the US federal government is resting on its laurels with respect to regulating AI (state and municipal governments provide a somewhat different picture), and that, whilst their intentions differ, the EU and the UK have been moving much more swiftly than the US government.
My opinion would change if regulation didn’t play a large role in how successful an AI pause is, e.g. if industry players could voluntarily practice restraint. There are also other factors that I’m not considering.
Climate change is wrecking the planet, Putin is trying to start World War Three, and the Middle East is turning into a bloodbath. Meanwhile, some people hide from reality and worry about a perceived threat from the latest tools that humanity has invented.
Is there intelligent life on Earth? I see little evidence to support that argument.
What is wrong with the reasoning here? Yes, there are a lot of things wrong with the world, but the extinction (total, no survivors) we’re actually likely to get is from AI, this decade, unless we do something to stop it.
Thanks for your reply. The only threat to humanity comes from humanity. AI, like any other tool such as atomic weapons or dynamite, will be used for good or bad by humans. AI is powerful and, because it’s a new technology, its impact on the future is debatable, but this has been the case ever since humans invented flint tools.
I say the fundamental problem is how to steer humanity away from improper use of technology, which can be achieved first by understanding human behaviours and motivations, then by the widespread dispersion of this knowledge, and finally by exposing the futility of such behaviour in our globalised, interconnected and interdependent society.
If the end of humanity does happen, it will not be due to AI, pandemics or atomic weapons; it will be because one group of humans decided it wanted to get an advantage over another group of humans and ignored all other considerations. Understand why, and we may be able to find a solution.
No. AGI is different. It will have its own goals and agency. It’s more akin to a new alien species than a “tool”. What we are facing here is basically better thought of as a (digital) alien invasion, facilitated (or at least accidentally unleashed) by the big AI companies. Less intelligent species don’t typically fare well when faced with competing, more intelligent species.
‘No. AGI is different. It will have its own goals and agency.’ Only if we choose to build it that way: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf (Though Bengio was correct when he pointed out that even if lots of people build safer tools, that doesn’t stop a more reckless person building an agent instead.)
People are very much choosing to build it that way unfortunately!
Thanks Greg. I’m sixty years old and grew up when everyone said the world was going to be destroyed in a thermonuclear war; then it was acid rain, then nanotechnology (covering the world in a layer of scum!), then the millennium bug; currently it’s climate change, and it looks like people are starting to worry about AI. Even the Prime Minister is at it, perhaps as a cover for his failed short-term policies. Humans are fundamentally neurotic (perhaps it gives us an evolutionary edge, always being on the lookout for new threats), but if you step back and take an overview of humanity, maybe you will see what the real problems are.
However, my point is: take care of today’s problems (with an eye on the mid term) and the future will look after itself. Who can predict the future with any degree of certainty anyway, so why worry? It’s correct that long-term thinking is needed to tackle climate change, but not for problems like Palestine/Israel, Putin’s and Xi Jinping’s ideology that threatens Europe and Asia, or Trump’s attack on democracy, all of which are trying to drag us back to repeat past failures. Long-term thinking should not be used to avoid tackling short-term problems.
From what I’ve read of science, biology, neurology, psychology, politics, economics, history and philosophy, we are on the verge of a breakthrough in new thought, and maybe AI, because it can pull vast pools of knowledge together and perhaps eliminate our biases and prejudices, will bring about great change for the better. This is not something to be afraid of but something to embrace, though of course caution is needed, and a simple fail-safe button should be built in in case we don’t like the outputs.
Thanks for reading.
Regards and good luck with your endeavours. Never stop learning, but keep it real.
Unfortunately it’s no longer a long-term problem; it’s 0-5 years away. Very much short-term!