Conversation with Robin Hanson on AI risk and forecasting

You can see a full transcript of this conversation, or listen to an audio recording, on our website.

Summary

We spoke with Robin Hanson on September 5, 2019. Here is a brief summary of that conversation:

  • Hanson thinks that now is the wrong time to put a lot of effort into addressing AI risk:

    • We will know more about the problem later, and there’s an opportunity cost to spending resources now rather than later, so there has to be a compelling reason to spend them now.

    • Hanson is not compelled by the existing arguments he’s heard for spending resources now:

      • Hanson thinks that we will see concrete signatures of problems before it’s too late; he is skeptical that there are big things that have to be coordinated ahead of time.

        • Relatedly, he thinks useful work anticipating problems in advance usually happens with concrete designs, not with abstract descriptions of systems.

      • Hanson thinks we are still too far away from AI for field-building to be useful.

  • Hanson thinks AI is probably at least a century, perhaps multiple centuries away:

    • Hanson thinks the mean estimate for when human-level AI will arrive is a long time from now, and he thinks progress in AI is unlikely to be ‘lumpy’ enough for it to arrive without much warning:

      • Hanson is interested in how ‘lumpy’ progress in AI is likely to be: whether progress is likely to come in large chunks or in a slower and steadier stream.

        • Measured in terms of how much a given paper is cited, academic progress is not lumpy in any field.

        • The literature on innovation suggests that innovation is not lumpy: most innovation is lots of little things, though once in a while there are a few bigger things.

    • From an outside view perspective, the current AI boom does not seem different from previous AI booms.

    • We don’t have a good sense of how much research needs to be done to get to human-level AI.

    • If we don’t expect progress to be particularly lumpy, and we don’t have a good sense of exactly how close we are, we have good reason to think we are not, e.g., five years away rather than halfway there.

    • Hanson thinks we shouldn’t believe it when AI researchers give 50-year timescales:

      • Rephrasing the question in different ways (e.g. “When will most people lose their jobs?”) causes people to give different timescales.

      • People consistently give overconfident estimates when they’re estimating things that are abstract and far away.

  • Hanson thinks AI risk takes up far too large a fraction of the attention of people thinking seriously about the future.

    • Hanson thinks more futurists should be exploring other future scenarios, roughly in proportion to how likely they are, with some extra weight for extremity of consequences.

    • Hanson doesn’t think that AI is that much worse than other future scenarios in terms of how much future value is likely to be destroyed.

  • Hanson thinks the key to intelligence is having many not-fully-general tools:

    • Most of the value in tools is in more specific tools, and we shouldn’t expect intelligence innovation to be different.

    • Academic fields are often reduced to simple essences, but real-world things like biological organisms and the industrial world progress via lots of little things, and we should expect intelligence to be more like the latter.

  • Hanson says the literature on human uniqueness suggests cultural evolution and language abilities came from several modest brain improvements, not clear differences in brain architecture.

  • Hanson worries that having so many people publicly worrying about AI risk before it becomes an acute problem will mean it is taken less seriously when it does, because the public will have learned to think of such concerns as erroneous fear-mongering.

  • Hanson would be interested in seeing more work on the following things:

    • Examples of big, lumpy innovations that made a big difference to the performance of a system; seeing these could change Hanson’s view of intelligence.

      • In particular, he’d be influenced by evidence for important architectural differences between the brains of humans and other primates.

    • Tracking the automation of U.S. jobs over time as a potential proxy for AI progress.

  • Hanson thinks people concerned about AI risk don’t engage enough with their critics.

    • Hanson is interested in seeing the concrete outside-view models people have for why AI might come soon.

    • Hanson is interested in proponents of AI risk responding to the following questions:

      • Setting aside everything you know except what this looks like from the outside, would you predict AGI happening soon?

      • Should the reasoning behind AI risk arguments be compelling to outsiders, i.e. people outside the field of AI?

      • What percentage of people who agree with you that AI risk is big agree for the same reasons that you do?

  • Hanson thinks that even if we tried, we wouldn’t now be able to solve all the small, messy problems that insects can solve, which indicates that having insect-level amounts of hardware is not sufficient.

    • AI researchers might argue that we can replicate the core functionalities of insects, but Hanson thinks their intelligence lies largely in being able to do many small things robustly in complicated environments.

Small sections of the original audio recording have been removed. The corresponding transcript has been lightly edited for concision and clarity.
