In this case, the equivalent is a “car safety” nonprofit that goes around to all the car companies to help them make safe cars. The AI safety initiatives would aim to ensure they can help or advise whichever groups end up building an AGI. However, knowing how to advise those companies does require building a few cars internally for experimentation.
I believe OpenAI has publicly stated that they are willing to work with any group that comes close to AGI. It’s in their charter:
Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
It’s also possible there won’t be much competition; there may be only 3-6 entities with a serious chance of building an AGI. One idea is to place safety researchers in almost every one of those entities.
It’s definitely an important question.