Inflection mentions “safety” and “alignment” but treats safety in a prosaic manner and doesn’t really engage with misalignment. It seems much worse than Anthropic, OpenAI, and DeepMind on planning for misalignment; it doesn’t seem to have a safety plan or realize it needs one.
What are your predictions on how AI can lead to prosperity? But also, on the flip side, how can it disrupt society?
I do think it’s going to be the most productive decade in the history of our species. Anyone who is a creator or an inventor is now going to have a compadre who gets their domain.
People who are trying to be productive are now going to have an aide that is going to turbocharge their productivity. That’s going to save people an insane amount of time. It’s going to make us much more creative and inventive.
On the flip side, anyone who has an agenda to cause disruption, cause chaos, or spread misinformation is also going to find the barriers to entry for their destabilization efforts lowered.
Technology tends to accelerate offense and defense at the same time. A knife can be used to cut tomatoes or to hurt somebody. That’s the challenge of the coming wave. It’s about containment. How do nation states control the proliferation of very powerful technologies, which can ultimately be a threat to the existence of the nation state if they are left unchecked?
Suleyman signed the CAIS letter.
Inflection joined the White House voluntary commitments (and commented here).
Piece he wrote: The AI Power Paradox (Foreign Affairs, Aug 2023). Discusses AI risk and proposes policy responses. See also associated CNN appearance.
It seems he was risk-skeptical in 2015. Probably there are more old quotes/sources.
I believe he’s recently said both things like ‘AI risk is a big deal’ and things like ‘AI risk isn’t a big deal’ (don’t have sources right now).
Piece he wrote: Humans and AI Will Understand Each Other Better Than Ever (WIRED, Dec 2022).
Interviews and conversations:
Barron’s, July
Channel 4 News, July
Possible, June
On with Kara Swisher, June
No Priors, May
Relevant Twitter quotes from 2023:
left unchecked, naive open source—in 20 yrs time—will almost certainly cause catastrophe
Progress is not slowing but speeding up
[pro-regulation]
We should legally ban use of AIs and chatbots in any kind of electioneering
It’s time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is “Scale & Capabilities Audits”
this kind of all out alarmism from CNN is becoming unhinged. I’m in the gym and it’s flashing up just before every ad break. totally ridiculous imho https://cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
LLM hallucinations will be largely eliminated by 2025 [+ clarification]
[if powerful models get much smaller] they will proliferate far and wide… this is the containment problem in a nutshell
[pro-Hoffman]
Powerful AI systems are inevitable. Strict licensing and regulation is also inevitable. The key thing from here is getting the safest and most widely beneficial versions of both.
[pro-Anthropic on safety]
Uncategorized updates:
https://www.wired.com/story/have-a-nice-future-podcast-18/
https://twitter.com/inflectionAI/status/1691943737969262820
https://www.samharris.org/podcasts/making-sense-episodes/332-can-we-contain-artificial-intelligence
https://www.ft.com/content/f828fef3-862c-4022-99d0-41efbc73db80
https://lithub.com/mustafa-suleyman-on-the-coming-wave-of-technological-disruption/
https://time.com/6310115/ai-revolution-reshape-the-world/
***https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/