I agree such commitments are worth noticing, and I hope OpenAI and other labs make more such commitments in the future. But this commitment is not huge: it's just “20% of the compute we’ve secured to date” (as of July 2023), to be used “over the next four years.” It’s unclear how much compute that is, and with compute use increasing exponentially, it may amount to quite little by 2027. Possibly you have private information, but based on public information, the minimum amount of compute consistent with the commitment is quite small.
It would be great if OpenAI or others committed 20% of their compute to safety! Even 5% would be nice.
I’ve heard OpenAI employees talk about the relatively high amount of compute superalignment has (complaining that superalignment has too much while they, the employees outside superalignment, don’t have enough). In conversations with superalignment people, I noticed they talk about it as a real strategic asset (“make sure we’re ready to use our compute on automated AI R&D for safety”) rather than as just an instance of safety washing. This was something Ilya pushed for back when he was still there.
Ilya is no longer on the Superalignment team?