Executive summary: A post from Obsolete, a Substack newsletter about AI, capitalism, and geopolitics, reports that Joaquin Quiñonero Candela has quietly stepped down as OpenAI’s head of catastrophic risk preparedness, highlighting a broader pattern of leadership turnover, decreasing transparency, and growing concerns about OpenAI’s commitment to AI safety amid mounting external pressure and internal restructuring.
Key points:
Candela’s quiet transition and shifting focus: Joaquin Quiñonero Candela, formerly head of OpenAI’s Preparedness team for catastrophic risks, has stepped down and taken a non-safety-related internal role at the company without a formal announcement.
Recurring instability in safety leadership: His departure follows the earlier reassignment of Aleksander Mądry and marks the second major change in the Preparedness team’s short history, reflecting a pattern of opaque leadership changes.
Broader exodus of safety personnel: Multiple key figures from OpenAI’s safety teams, including cofounders and alignment leads, have left in the past year, many citing disillusionment with the company’s shifting priorities away from safety toward rapid product development.
Governance structures remain unclear: While OpenAI has established new committees like the Safety Advisory Group (SAG) and the Safety and Security Committee (SSC), their internal operations, leadership, and membership are largely undisclosed or siloed, raising concerns about accountability.
Reduced safety transparency and practices: The company has recently released models like GPT-4.1 without accompanying safety documentation, and critics argue that OpenAI is quietly rolling back earlier safety commitments — such as pre-release testing for fine-tuned risky models — even as external commitments remain voluntary.
Competitive pressure and regulatory resistance: The post warns that companies like OpenAI and Google are increasingly prioritizing speed over safety, while lobbying against proposed regulation like California’s SB 1047, potentially leaving critical AI safety gaps unaddressed as model capabilities grow.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.