Are you sure that this is the standard way in competitions? It is absolutely correct that before the final submission you would find the best model by fitting it on a train set and evaluating it on a test set. However, once you have found the best-performing model that way, there is no reason not to retrain it with the best parameters on the combined train+test data and submit that one. (The submission consists of the model's predictions on the validation set, not the model's parameters.) After all, more training data generally means better performance.
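
A minimal sketch of what I mean, assuming scikit-learn; the synthetic data and `best_params` below are stand-ins for the real competition data and the hyperparameters you found during model selection:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Stand-in for the labelled competition data and the unlabelled submission set.
X, y = make_regression(n_samples=1000, n_features=20, random_state=0)
X_labelled, X_submit = X[:800], X[800:]
y_labelled = y[:800]

# 1. Model selection: split the labelled data, fit on train, evaluate on test.
X_train, X_test, y_train, y_test = train_test_split(
    X_labelled, y_labelled, test_size=0.2, random_state=0
)
best_params = {"n_estimators": 200, "max_depth": 8}  # assumed result of the search

# 2. Final model: refit with the chosen hyperparameters on train + test combined.
final_model = RandomForestRegressor(**best_params, random_state=0)
final_model.fit(X_labelled, y_labelled)

# 3. The submission is the model's predictions on the held-out set,
#    not the fitted model itself.
submission_preds = final_model.predict(X_submit)
```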