Michael—thanks for summarizing this alarming development.
I suspect that in 50 to 100 years, these tech CEOs, and the AI researchers who work for them, may be remembered as some of the most reckless, irresponsible, hubristic, and unethical humans ever to influence human history.
They have absolutely no democratic mandate from the 8 billion humans they will affect to build the systems they are developing. They have not made a compelling case to the public that the benefits will exceed the risks. They are paying lip service to AI safety while charging ahead at full speed toward a precipice.
IMHO, EAs should consider focusing a bit more on hitting the pause button on all advanced AI research, and stop pretending that ‘technical AI alignment research’ will significantly reduce any of the catastrophic risks from these corporate arms races.
Whatever benefits humanity may eventually derive from AI will still be there for the taking in 100 years, 500 years, 1,000 years. We may not live to see them, if AI doesn’t solve longevity in our lifetimes. But I’d rather see a future where AI research is paused for a century or two, and our great-grandkids have a fighting chance at survival, than one where we make a foolhardy bet that these AI companies are actually making rational risk/benefit decisions in our collective interests.
(Sorry for the feisty tone here, but I’m frustrated that so many EAs seem to put so much faith in these corporations and their ‘AI safety’ window dressing.)