I think my ranking might be [good models + high-impact vision grounded in those models] > [good models alone + modest goals] > [mediocre models + grand vision], where "good model" means one that is both reasonably accurate and continually improving via feedback loops inherent in the project, with the latter probably being more important. And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction.
I clarified some of my epistemic concerns.