I’ve heard OpenAI employees talk about the relatively high amount of compute superalignment has (employees outside superalignment complaining that the team has too much while they don’t have enough). In conversations with superalignment people, I noticed they talk about it as a real strategic asset (“make sure we’re ready to use our compute on automated AI R&D for safety”) rather than just an example of safety washing. This was something Ilya pushed for back when he was there.
Ilya is no longer on the Superalignment team?