Mustafa Suleyman is influential as a co-founder of DeepMind and the CEO of Inflection AI, which some have suggested should be considered a major AGI lab.
[Question] What do we know about Mustafa Suleyman’s position on AI Safety?
Inflection mentions "safety" and "alignment" but treats safety in a prosaic manner and doesn't really engage with misalignment. It seems much worse than Anthropic, OpenAI, and DeepMind on planning for misalignment; it doesn't seem to have a safety plan or to realize it needs one.
Suleyman signed the CAIS letter.
Inflection joined the White House voluntary commitments (and commented here).
Piece he wrote: The AI Power Paradox (Foreign Affairs, Aug 2023). Discusses AI risk and proposes policy responses. See also associated CNN appearance.
It seems he was risk-skeptical in 2015. There are probably more old quotes/sources.
I believe he has recently said things along the lines of both 'AI risk is a big deal' and 'AI risk isn't a big deal' (I don't have sources right now).
Piece he wrote: Humans and AI Will Understand Each Other Better Than Ever (WIRED, Dec 2022).
Interviews and conversations:
Barron’s, July
Channel 4 News, July
Possible, June
On with Kara Swisher, June
No Priors, May
Relevant Twitter quotes from 2023:
left unchecked, naive open source—in 20 yrs time—will almost certainly cause catastrophe
Progress is not slowing but speeding up
[pro-regulation]
We should legally ban use of AIs and chatbots in any kind of electioneering
It’s time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is “Scale & Capabilities Audits”
this kind of all out alarmism from CNN is becoming unhinged. I’m in the gym and it’s flashing up just before every ad break. totally ridiculous imho https://cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
LLM hallucinations will be largely eliminated by 2025 [+ clarification]
[if powerful models get much smaller] they will proliferate far and wide… this is the containment problem in a nutshell
[pro-Hoffman]
Powerful AI systems are inevitable. Strict licensing and regulation is also inevitable. The key thing from here is getting the safest and most widely beneficial versions of both.
[pro-Anthropic on safety]
Uncategorized updates:
https://www.wired.com/story/have-a-nice-future-podcast-18/
https://twitter.com/inflectionAI/status/1691943737969262820
https://www.samharris.org/podcasts/making-sense-episodes/332-can-we-contain-artificial-intelligence
https://www.ft.com/content/f828fef3-862c-4022-99d0-41efbc73db80
https://lithub.com/mustafa-suleyman-on-the-coming-wave-of-technological-disruption/
https://time.com/6310115/ai-revolution-reshape-the-world/
https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/
Not as much as we'll know when his book comes out next month! For now, his co-founder Reid Hoffman has said some reasonable things about legal liability and rogue AI agents, though he doesn't express concern about x-risks.
He sounds x-risk-pilled here: https://open.spotify.com/episode/6TiIgfJ18HEFcUonJFMWaP?si=P6iTLy6LSvq3pH6I1aovWw