I disagree completely. It seems like the kinds of things they could have done to not be subject to this bug would be, e.g.:
Basically be expert maintainers of all their dependencies, working full-time to fuzz-test them or prove them correct (if they did this they would never have been able to release their website)
Magically pick higher-quality dependencies than they already do, without doing the above (no reason to believe this is possible, since the bug was in an old, first-party, actively maintained client library for a project widely respected for its quality, namely Redis)
Have some kind of single-tenant-per-stack setup where each user gets a database/server all to themselves serving their connections to GPT (AFAIK totally ridiculous; what would be the cost/benefit of running their API and website like this?)
Since, to my eyes, every single software organization in the world that has ever produced a public website would have been basically equally likely to get hit by this bug, I totally disagree that it’s useful evidence of anything about OpenAI’s culture, other than “I guess their culture is not run by superintelligent aliens who run at 100x human speed and proved all of their code correct before releasing the website.” I agree, it’s too bad that OpenAI is not that.
What is the thing that you thought they might have done differently, such that you are updating on them not having done that thing?
For reference, the bug: https://github.com/redis/redis-py/issues/2624
Is your claim that e.g. Google or American Express would be equally likely as OpenAI to suffer this issue? If so, I would definitely disagree. I would be extremely surprised to see this type of issue in, e.g., Gmail, and if it did occur I think it would be correctly perceived as a massive scandal. Yet Google is almost certainly using Redis for important use cases.
Part of having a security mindset is assuming that system components can fail (or be made to fail) in surprising ways and making sure that the overall system is resilient to those failures. This does not necessarily require, as you suggest, vetting every part of the system. After all, few organizations had vetted e.g. log4j, but that does not mean that all organizations were equally affected by the log4j vulnerability.
There are things that an organization could have done to prevent exposure to the problem with redis-py. Here are some examples:
Assert that the result coming back from Redis is for the correct/authenticated user, or otherwise that it fits the appropriate context / is the expected response to the query (see the sketch after this list).
Vet upgrades to libraries such as redis-py and don’t deploy new versions unless (a) they contain a security fix, (b) they have had some time to bake in, or (c) you have conducted a careful review of the differences (see the pinning example after this list).
Conduct pre-deployment testing under load to see that the overall application behaves as expected.
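On the first point, here is a minimal sketch of what such a check could look like, written in Python since redis-py is a Python client. This is not OpenAI's actual code: the key layout, the `cache_conversation`/`get_conversation` helpers, and the stored `owner` field are all hypothetical. The idea is just to tag cached values with the user they belong to, and fail closed if a read comes back tagged for someone else:

```python
import json

import redis  # redis-py

# Hypothetical cache connection; details depend on the deployment.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def cache_conversation(user_id: str, convo_id: str, payload: dict) -> None:
    """Store a conversation in the cache, tagged with the user who owns it."""
    value = json.dumps({"owner": user_id, "payload": payload})
    r.set(f"convo:{convo_id}", value, ex=3600)  # expire after an hour


def get_conversation(authenticated_user_id: str, convo_id: str) -> dict | None:
    """Read a conversation back, refusing to serve it to anyone but its owner."""
    raw = r.get(f"convo:{convo_id}")
    if raw is None:
        return None
    record = json.loads(raw)
    # Fail closed: if the stored owner doesn't match the authenticated caller,
    # treat the entry as mis-delivered rather than returning it.
    if record.get("owner") != authenticated_user_id:
        raise RuntimeError("cached value belongs to a different user; refusing to serve it")
    return record["payload"]
```

A check like this costs a few bytes per cache entry, and it turns "the client library handed us someone else's response" from a silent data leak into a loud error that someone has to go investigate.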
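On the second point, the mechanical half of vetting upgrades is simply making sure that new versions cannot arrive implicitly. A sketch of what that looks like in a Python project's requirements file; the version numbers are placeholders, not recommendations:

```text
# requirements.txt: pin exact versions so that upgrading redis-py (or anything
# else) is a deliberate, reviewed change rather than something that rides in
# silently with the next build. Versions below are placeholders.
redis==4.5.1
fastapi==0.95.0
```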
I’m not really trying to claim that this stuff is simple or easy, or that a security mindset is common among tech startups; just that these are the sort of steps that a security-oriented company would take, and the fact that OpenAI apparently did not take such steps is (limited) evidence that OpenAI management is not operating with a security mindset.