This is a great exercise. I am definitely concerned about the endogenous nature of predictions: I think you are right that people are more likely to offer predictions on the aspects of their project that are easier to predict, especially since the prompt you showed explicitly asks for them in an open-ended way. A related issue is that people may be more comfortable making predictions about less important aspects of the project, since the consequences of being wrong are lower. If either of these is happening, then the measured forecasting accuracy wouldn’t generalize at all.
Both of these issues can be partly addressed if the predictions are solicited by another person reading the writeup, rather than chosen by the writer. For example, suppose Alice writes up an investigation into human challenge trials as a strategy for medical R&D; Bob reads it and asks Alice for some predictions that he feels are important complements to the writeup, e.g. “will human challenge trials be used for any major disease by the start of 2023?” and “will the US take steps to encourage human challenge trials by the start of 2024?”
This obviously helps prevent Alice from making predictions only about easier questions, and it also ensures that the predictions being made are actually decision-relevant (since they are solicited by someone else who serves the role of an intelligent layperson/policymaker reading the report). Seems like a win-win to me.
A related issue is that people may be more comfortable making predictions about less important aspects of the project, since the consequences of being wrong are lower.
I’m actually concerned about the same thing but for exactly the opposite reason: because the consequences of being wrong (a hit to one’s Brier score) are the same regardless of the importance of the prediction, people might allocate the same time and effort to every prediction, including the more important ones that perhaps warrant closer examination.
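To make that concrete: the Brier score is just the mean squared difference between forecast probabilities and outcomes, so every question carries equal weight in the total. A minimal sketch (with hypothetical numbers, not anyone’s actual predictions):

```python
# Brier score: mean squared error between forecast probabilities (0-1)
# and binary outcomes (0 or 1). Every prediction contributes equally,
# regardless of how important the underlying question is.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a confident 90% forecast that misses on a trivial
# question costs exactly as much as the same miss on a crucial one.
trivial_miss = brier_score([0.9], [0])  # 0.81
crucial_miss = brier_score([0.9], [0])  # 0.81 -- identical penalty
print(trivial_miss, crucial_miss)
```

So the scoring rule itself gives no incentive to spend extra effort on the high-stakes questions.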
We’re currently trialing some of what you suggest about bringing in other people to propose predictions. This might be an improvement, but it’s too early to say, and scaling it up wouldn’t be easy, for a few reasons:
It’s hard to make good predictions about a grant without lots of context.
Grant investigators are very time-constrained, so they can’t afford to provide that context by having a lot of back and forth with the person suggesting the predictions.
Most of the information needed to gain context about the grant is confidential by default.