Executive summary: In this wide-ranging interview, Sam Altman discusses the recent OpenAI board saga, the development of GPT-4 and future models, the path to AGI, and key challenges around AI safety, compute needs, and competition in the AI industry.
Key points:
1. Altman describes the OpenAI board saga as a painful but important learning experience about the need for robust governance as AI progresses. He is optimistic about the new board structure.
2. GPT-4 is an impressive leap forward, but Altman believes future models will make it seem limited in hindsight. He is most excited about AI systems becoming “smarter” in a general sense.
3. Altman expects AI to dramatically change knowledge work in the coming years, with things like programming increasingly done via natural language. He sees compute becoming a precious commodity.
4. On the path to AGI, Altman believes the ability to significantly accelerate scientific progress will be a key milestone. He thinks AGI is likely by the end of the decade, but emphasizes the difficulty of defining AGI precisely.
5. Altman sees AI safety as an increasingly central focus, with key challenges including technical robustness, social impacts, and security against bad actors. But he currently worries more about other risks than AI itself becoming uncontrollable.
6. On the competition between AI companies, Altman sees positives in faster innovation but potential risks in an uncontrolled “arms race.” He emphasizes the importance of collaboration on safety.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
How are people just letting him get away with a victim narrative?