If running automated AI researchers for R&D is compute-intensive, they could deploy their best models exclusively internally, or limit the volume of inference available to external users.
There are already present-day versions of this dilemma. OpenAI claims that DeepSeek used OpenAI model outputs to train its own models, and OpenAI doesn't reveal its reasoning models' full chains of thought, in part to prevent competitors from using them as training data.
Note that Thorstad’s arguments apply mainly against strong longtermism, i.e. the claim that future generations are overwhelmingly or astronomically more important than current generations, not against the weaker claim that they are important or even much more important than current generations.