Really great discussion. How can we get this kind of information out into the general population?
IMHO the biggest challenge we face is convincing people that the default outcome, if we do nothing, is most likely an AI that is much more powerful than humans. Tom and Luisa, you do a great job of making this case. If someone disagrees, it is up to them to show where the flaw in the logic lies.
I think we face three critical challenges in getting people to act on this as urgently as we need to:
1. Our brains have evolved to learn from experience. As with climate change and even nuclear war, we have a deep-seated belief that things will change only gradually, and especially that they will not change dramatically in negative ways, because in our lifetimes that is what we've always experienced: even the scariest crises have tended to work out OK*, or where they haven't (e.g. climate change), we've somehow convinced ourselves that they have. And this belief is OK, until suddenly it's not, and then it's too late.
2. Most people know the word "exponential" but don't really understand what it means. The exponential growth you describe here, where each generation of AI can develop a new generation of AI that is X% better and faster, is simply beyond most of our experience. It reminded me of chemistry experiments with acid-base titrations and the virtual impossibility of titrating to exactly pH 7 with strong acids and bases. If we imagine human-level AGI as pH 7, we might feel comfortable that the pH has crept from 1.1 to 1.2 to 1.3 to 1.5 to 1.8, without realising that the next drop of NaOH will make it 10.4. (I've put a rough calculation after this list to show how sharp that jump really is.)
3. Our brains have evolved to live in denial. This is a vital and mostly positive trait. For example, we all know that we're going to die, but we put this out of our minds almost all of the time. When faced with something really frightening (and even out-of-control AI isn't quite as scary as death), we're able to push it to the back of our minds: not consciously deny it, not intellectually deny it, but rather accept that it is logically true, then ignore it and act as if it weren't true. Of course, it helps that there will always be people who take advantage of this and make (usually flawed) arguments that the concerns are unwarranted, which gives us plausible deniability and makes us feel even less worried.
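For anyone who wants to see how sharp that titration jump really is, here is a rough back-of-the-envelope sketch in Python. The 0.1 M concentrations and 50 mL starting volume are my own illustrative assumptions, not numbers from the episode; it just models a strong acid (HCl) being neutralised by a strong base (NaOH):

```python
# Rough sketch of the titration analogy: a strong acid (HCl) titrated with a
# strong base (NaOH). The 0.1 M concentrations and 50 mL starting volume are
# illustrative assumptions, chosen only to show the shape of the curve.
import math

ACID_MOLARITY = 0.100    # mol/L of HCl
BASE_MOLARITY = 0.100    # mol/L of NaOH
ACID_VOLUME_ML = 50.0    # starting volume of acid, in millilitres

def ph_after_adding(base_ml: float) -> float:
    """pH of the mixture after adding `base_ml` millilitres of NaOH."""
    moles_h = ACID_MOLARITY * ACID_VOLUME_ML / 1000.0
    moles_oh = BASE_MOLARITY * base_ml / 1000.0
    litres = (ACID_VOLUME_ML + base_ml) / 1000.0
    if moles_h > moles_oh:    # before the equivalence point: excess acid
        return -math.log10((moles_h - moles_oh) / litres)
    if moles_oh > moles_h:    # after the equivalence point: excess base
        return 14.0 + math.log10((moles_oh - moles_h) / litres)
    return 7.0                # exactly at the equivalence point

for added_ml in (0.0, 25.0, 45.0, 49.0, 49.9, 50.1, 51.0):
    print(f"{added_ml:5.1f} mL NaOH added -> pH {ph_after_adding(added_ml):5.2f}")
```

Running it, the pH crawls from about 1 to about 4 over the first 49.9 mL, and then a single extra drop (0.2 mL) throws it to about 10. That is the shape of curve I worry most people simply don't have an intuition for.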
All this means that most of us (I include myself in this) read this article, fully accept that Tom’s arguments are compelling, realise that we absolutely must do something, but somehow do not rush out and storm the parliament demanding immediate action. Instead, we go on to the next item on our to-do list, maybe laundry or grocery shopping … I’m really determined to figure out a way to overcome this inertia.
*Obviously this is only true for those of us in the current generation, in the West. I'm sure those who lived through world wars or famines, or wars in their own countries, including those today in Syria or Ukraine or Sudan, have a better understanding of how things can suddenly go wrong. But most of the people making the decisions about AI have never experienced anything like that.