Thoughts on short timelines

[Cross-posted from my website.]

Some rationalists and effective altruists have argued (1, 2, 3) that there is a non-negligible chance that artificial intelligence will attain human or super-human levels of general intelligence very soon.

In this post, I’d like to outline why I’m not convinced that this scenario has non-negligible probability. To clarify, I’m arguing against the hypothesis that “artificial general intelligence (AGI) is 10% likely to be built in the next 10 years”, where AGI is defined as the ability to successfully perform any intellectual task that a human is capable of. (My favoured definition of “AGI” is that autonomous intelligent machines contribute at least 50% to the global economy, as outlined here, but I don’t think the precise definition matters much for the purposes of this post.)

The simplest counterargument is to look at the rate of progress we’ve seen so far and extrapolate from it. Have there been any ground-breaking results over the last few years? I’m not talking about “normal” results of machine learning papers; I’m talking about milestones that constitute serious progress towards general intelligence. We are certainly seeing progress in the former sense – I don’t mean to belittle the efforts of machine learning researchers. (An example of what I’d consider “ground-breaking” would be advanced transfer between different domains, e.g. playing many board or video games well after training on only a single game.)

Some people considered AlphaGo (and later AlphaZero) ground-breaking in this sense. But that (the match against Lee Sedol) was in March 2016 – more than two years ago at the time of this writing (late 2018) – and it seems that there haven’t been comparable breakthroughs since. (In my opinion, AlphaGo wasn’t that exceptional anyway – but that’s a topic for another post.)

Conditional on short timelines, I’d expect to observe ground-breaking progress all the time. The absence of such progress is therefore evidence that this scenario is not materializing. In other words, it seems clear to me that the current rate of progress is not sufficient to reach AGI within 10 years. (See also Robin Hanson’s AI progress estimate.)

That said, we should distinguish between a) the belief that the current rate of progress will lead to AGI within 10 years, and b) the belief that there will be significant acceleration at some point, which will enable AGI within 10 years. One could reject a) and still expect AGI within 10 years, if for some reason we won’t see impressive results until very near ‘the end’. In that case, the current lack of ground-breaking progress isn’t (strong) evidence against short timelines.

But why expect that? There’s an argument that progress will become discontinuous as soon as recursive self-improvement becomes possible. But we are talking about progress from the status quo to AGI, so that doesn’t apply: it seems implausible that artificial intelligences would vastly accelerate progress before they are highly intelligent themselves. (I’m not fully sold on that argument either, but that’s another story for another time.)

Given that significant resources have been invested in AI/ML for quite a while, it seems that discontinuous progress – on the path to AGI, not during or after the transition – would be at odds with the usual patterns of technological progress. The reference class I have in mind is “improvement of a gradual attribute (like intelligence) of a technology over time, given significant investment of resources”. Examples that come to mind are the maximal speed of cars, which increased steadily over time, or computing power and memory capacity, which have also progressed very smoothly.

(See also AI Impacts’ discontinuous progress investigation. They consider new land speed records set by jet-propelled vehicles one of the few cases of (moderate) discontinuities they’ve found so far. To me, though, that doesn’t feel analogous in terms of the magnitude of discontinuity that AGI would require.)

The point is even stronger if “intelligence” (in the context of machine intelligence) is actually a collection of many distinct skills and abilities rather than a meaningful, unified property. In that case, reaching AGI requires progress on many fronts, comparable to improving the “overall quality” of cars or computer hardware.

It’s possible that progress accelerates simply due to increased interest – and therefore increased funding and other resources – as more people recognise AI’s potential. Indeed, while historical progress in AI was fairly smooth, there may have been some acceleration over the last decade, plausibly for this reason. So perhaps that could happen to an even larger degree in the future?

There is, however, already significant excitement (perhaps hype) around AI, so it seems unlikely to me that this could increase the rate of progress by orders of magnitude. In particular, if highly talented researchers are the main bottleneck, you can’t scale up the field simply by pouring more money into it. Moreover, it has been argued that the next AI winter is well on its way, i.e. that we may actually start to see a decline in interest in AI rather than a further increase.

--

One of the most common reasons to nevertheless assign a non-negligible probability – say, 10% – is simply that we’re so clueless about what will happen in the future that we shouldn’t be confident either way, and should thus favour a broad distribution over timelines.

But are we actually that ignorant? It is indeed extremely hard, if not impossible, to predict the specific results of complex processes over long timespans – like which memes and hashtags will be trending on Twitter in May 2038. However, the plausibility or implausibility of short timelines is not a question of this type, since the development of AGI would be the outcome of a broad trend, not a specific, isolated result. We have reasonably strong forms of evidence at our disposal: we can look at historical and current rates of progress in AI, we can consider general patterns of innovation and technological progress, and we can estimate how hard general intelligence is (e.g. whether it’s an aggregation of many smart heuristics vs. a single insight).

Also, what kind of probability should an ignorant prior assign to AGI in 10 years? 10%? But then wouldn’t you also have to assign 10% to advanced nanotechnology within 10 years, out of the same ignorance? What about nuclear risk – we’re clueless about that too, so maybe a 10% chance of a major nuclear catastrophe in the next 10 years? 10% on a complete breakdown of the global financial system? If you keep doing this for more and more things, you end up nearly certain that something crazy will happen in the next 10 years, which seems wrong given historical base rates. So perhaps an ignorant prior should actually place much lower probability on each individual event.
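To make the arithmetic behind this concrete, here is a minimal sketch with made-up numbers (and the simplifying assumption that the events are roughly independent):

```python
# Minimal sketch with made-up numbers: an "ignorant prior" that assigns 10% to each of
# several roughly independent transformative events over the next decade quickly implies
# near-certainty that at least one of them happens.
p_each = 0.10  # assumed per-event probability over 10 years
for n in (3, 5, 10, 20):
    p_at_least_one = 1 - (1 - p_each) ** n
    print(f"{n:>2} events: P(at least one) = {p_at_least_one:.0%}")
# Output: ~27%, ~41%, ~65%, ~88% – hard to square with historical base rates of decade-scale upheavals.
```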

--

But perhaps one’s own opinion shouldn’t count for much anyway, and we should instead defer to some set of experts? Unfortunately, interpreting expert opinion is tricky. On the one hand, in some surveys machine learning researchers put non-negligible probability on “human-level intelligence” (whatever that means) arriving within 10 years. On the other hand, my impression from interacting with the community is that the predominant opinion is still to confidently dismiss the short timelines scenario, to the point of not even seriously engaging with it.

Alternatively, one could look at the opinions of smart people in the effective altruism community (“EA experts”), who tend to assign a non-negligible probability to short timelines. But this (vaguely defined) set of people is subject to a self-selection bias – if you think AGI is likely to happen soon, you’re much more likely to spend years thinking and talking about it – and there is little external validation of their “expert” status.

A less obvious source of “expert opinion” is the financial markets – because market participants have a strong incentive to get things right – and their implicit opinion is to confidently dismiss the possibility of short timelines.

In any case, it’s not surprising if some people have wrong beliefs about this kind of question. Lots of people are wrong about lots of things, and it’s not unusual for communities (like EA or the machine learning community) to have idiosyncratic biases or to suffer from groupthink. The question is whether more people buy into short timelines than you’d expect conditional on short timelines being wrong (in which case some people would still buy into them, comparable to past AI hypes).

Similarly, do we see fewer or more people buying into short timelines than you’d expect if short timelines are right (in which case there would surely still be a few stubborn professors who won’t believe it until the very end)?

I think the answer to the second question is “fewer”. Perhaps the answer to the first question is “somewhat more” but I think that’s less clear.
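To spell out the implied update: what matters is the ratio of how likely the observed level of buy-in is under short timelines versus under long timelines. The sketch below is purely illustrative – the prior and the two likelihoods are numbers I made up to show the direction of the update, not figures from any survey:

```python
# Illustrative Bayes update with made-up numbers. The point is only the structure:
# the posterior moves according to P(observed buy-in | short) / P(observed buy-in | not short).
prior_short = 0.10            # hypothetical prior on AGI within 10 years
p_obs_given_short = 0.3       # hypothetical: chance of seeing this (low) level of buy-in if short timelines are right
p_obs_given_not_short = 0.6   # hypothetical: chance of seeing it if short timelines are wrong

posterior_short = (prior_short * p_obs_given_short) / (
    prior_short * p_obs_given_short + (1 - prior_short) * p_obs_given_not_short
)
print(f"posterior P(short timelines) = {posterior_short:.1%}")  # ~5.3% with these made-up numbers
```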

--

All things considered, I think the probability of a short timeline scenario (i.e. AGI within 10 years) is no more than 1–2%. What am I missing?