Nice post and I fully agree.
Unfortunately it all goes back to inadequate math education and effective disinformation campaigns. Whether it was tobacco or climate change, those who opposed change and regulation have always focused on uncertainty as a reason not to act, or to delay. And they have succeeded in convincing the vast majority of the public. The mentality is: “even the scientists don’t agree on whether we’ll have a global catastrophe or total human extinction—so until we’re sure which one it is, let’s just keep using fossil fuels and pumping out carbon dioxide.”
With AI, I liken most of humanity’s mentality to that of a lazy father watching a football game who wants a soda. There is a store just across a busy highway from his house. He could go get the soda himself, but he might miss an important score. So instead he sends his 7-year-old son to the store. Because, realistically, there’s a good chance his son won’t get hit by a car, while if he goes himself, he is certain to miss part of the game.
No parent would think like that. But when it comes to AI, that’s how we think.
And timelines are just the nth excuse to keep thinking that way. “We don’t need to act yet; it might not happen for 5 years, and some people say even 10.”
The challenge for us is to somehow wake people up before it’s too late, despite the fact that the people best positioned to pause are the most gung-ho of all, whether they are CEOs or the US president. They personally have everything to gain from accelerating AI, even if it ends up screwing everyone else (and let’s be realistic: they don’t really care about anyone else).