[Question] Does “calibrated probability assessment” training work?

In “How to Measure Anything” (chapter 5), Douglas Hubbard describes the calibration training he provides to individuals and organizations that want to improve their estimation skills. He includes a sample test based on general-knowledge trivia, with questions like

“What is the air distance from LA to NY?”

for which the student is supposed to provide a 90% confidence interval. There are also some true/​false questions where the student states their level of confidence in the answer, e.g.

“Napoleon was born on Corsica”.
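For concreteness, here is a minimal sketch of how such a test might be scored: coverage for the interval questions (a well-calibrated estimator's 90% intervals should contain the true value about 90% of the time) and a Brier score for the true/false questions. The sample responses are hypothetical, not from the book.

```python
def interval_coverage(answers):
    """Fraction of 90% confidence intervals containing the true value.
    A well-calibrated estimator should score close to 0.9."""
    hits = sum(1 for low, high, truth in answers if low <= truth <= high)
    return hits / len(answers)

def brier_score(forecasts):
    """Mean squared error of stated probabilities vs. outcomes (1/0).
    Lower is better; always answering 0.5 yields 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical responses: (lower bound, upper bound, true value)
intervals = [
    (2000, 3000, 2451),  # LA-NY air distance in miles: interval contains it
    (1500, 1800, 2451),  # interval misses
]

# Hypothetical responses: (stated confidence the statement is true, outcome)
binaries = [
    (0.8, 1),  # "Napoleon was born on Corsica" is true
    (0.6, 0),
]

print(interval_coverage(intervals))  # 0.5
print(brier_score(binaries))         # (0.2**2 + 0.6**2) / 2 = 0.2
```

Hubbard's claim, as I read it, is that repeated feedback on scores like these shifts people toward calibration.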

In the following few pages he presents some of the data he has collected on his trainees, suggesting this sort of practice helps people become better estimators of various things, including forecasting the likelihood of future events. For example, he describes CTOs making more accurate predictions about new technology after completing the training.

My question: Is there evidence this approach works? Does practice making probabilistic estimates about trivia improve people’s ability to forecast non-trivial matters? Have there been published studies on this?

Thanks!