[Question] How independent is the research coming out of OpenAI’s preparedness team?

For instance, how independent is research such as “Building an early warning system for LLM-aided biological threat creation”?

Edit: By "how independent", I mean something like: how truth-seeking is the preparedness team, versus how much are they confirming what's in OpenAI's financial interest (to whatever extent these two aims come into tension)?