I disagree completely. The kinds of things they could have done to not be subject to this bug would be, e.g.:
- Basically be expert maintainers of all their dependencies, working full-time to fuzz-test them or prove them correct (if they did this, they would never have been able to release their website at all).
- Magically pick higher-quality dependencies than they already do, without doing the above (there is no reason to believe this is possible, since the bug was in an old, actively maintained, first-party dependency of Redis, a software project widely respected for its quality).
- Have some kind of single-tenant-per-stack setup, where each user gets a database and server all to themselves (AFAIK totally ridiculous—what would the cost/benefit of running their API and website like this even be?).
Since, to my eyes, every single software organization in the world that has ever shipped a public website would have been about equally likely to get hit by this bug, I totally disagree that it's useful evidence about OpenAI's culture, other than "I guess their culture is not run by superintelligent aliens who run at 100x human speed and proved all of their code correct before releasing the website." I agree that it's too bad OpenAI is not that.
What is the thing that you thought they might have done differently, such that you are updating on them not having done that thing?
For reference, the bug: https://github.com/redis/redis-py/issues/2624
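For readers who don't want to dig through the issue: the bug class here is that cancelling an async task between sending a command and reading its reply can leave the reply buffered on a shared connection, so the next caller on that connection receives the previous caller's response. Below is a minimal, self-contained asyncio sketch of that failure mode; the `Connection`, `get`, and queue-based "server" are toy stand-ins I made up to illustrate the race, not redis-py's actual code.

```python
import asyncio


class Connection:
    """Toy shared connection: replies arrive in FIFO order, echoing requests."""

    def __init__(self):
        self._replies = asyncio.Queue()

    async def send_command(self, cmd):
        # The "server" echoes every command back as its reply.
        await self._replies.put(f"reply-to-{cmd}")

    async def read_response(self):
        # Simulate waiting on the socket -- a cancellation point.
        await asyncio.sleep(0)
        return await self._replies.get()


async def get(conn, key):
    await conn.send_command(f"GET {key}")
    # If we are cancelled here, after sending but before reading, the
    # reply stays buffered on the shared connection and is delivered
    # to whoever reads from it next.
    return await conn.read_response()


async def main():
    conn = Connection()

    # User A issues a request, then is cancelled (e.g. client disconnect)
    # after the command was sent but before the reply was consumed.
    task_a = asyncio.create_task(get(conn, "user-A-session"))
    await asyncio.sleep(0)  # let A send its command and block on the reply
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B reuses the same pooled connection and gets A's reply.
    return await get(conn, "user-B-session")


result = asyncio.run(main())
print(result)  # → reply-to-GET user-A-session
```

The point of the sketch is that nothing in either caller's code is locally wrong; the hazard only exists because cancellation and connection pooling interact, which is exactly why this is so easy for any organization to ship.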
Thanks for this post. I am a software engineer who has recently been trying to do specifically altruistic work. By nature I am somewhat disdainful of PR and of most existing bureaucracies and authorities, so your emphasis on how important interoperating with those systems was for your work is very useful input as I try to switch into "actually trying to be altruistic" mode.