ChatGPT bug leaked users’ conversation histories
Link post
To me, this is evidence that OpenAI is not operating with a "security mindset" culturally. In my experience, this sort of attitude tends to be fairly uniform across a company's culture, so if user data is not handled securely, we might conclude that the AI development work itself is likewise not approached with the thoughtfulness that engineering against threat actors requires.