Generate high-quality forecasts on-demand, rather than relying on pre-computed forecasts for scoring
Leverage repositories of key insights, though likely not in the form of formal probabilistic mathematical models
To be clear, I think there’s a lot of batch intellectual work we can do before users ask for specific predictions. So “Generating high-quality forecasts on-demand” doesn’t mean “doing all the intellectual work on-demand.”
However, I think this batch intellectual work could take many different forms. I used to think that this batch work would produce a large set of connected mathematical models. Now I think we probably want something very compressed. If a certain mathematical model can easily be generated on-demand, then there’s not much benefit to having it made and saved ahead of time. However, I’m sure there are many crucial insights that are both expensive to find and useful for many of the questions that LLM users ask about.
So instead of searching for and saving math models, a system might do a bunch of intellectual work and save statements like, “When estimating the revenue of OpenAI, remember crucial considerations [A] and [B]. Also, a surprisingly good data source for this is Twitter user ai-gnosis-34.”
A lot of user-facing forecasts or replies should basically be the “last mile” of intellectual work: all the key insights have already been found, and now there just needs to be a bit of customization for the very specific questions someone has.
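To make this architecture a bit more concrete, here is a minimal Python sketch of the idea: a repository of compressed insights produced by batch work, queried at forecast time for the “last mile.” All names here are hypothetical, and the keyword-matching retrieval is a stand-in for whatever a real system would use (e.g., embedding search over insight notes):

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    """A compressed finding from batch work: expensive to derive, reusable across questions."""
    topic: str   # what the insight is about, e.g. "OpenAI revenue"
    note: str    # the saved statement, e.g. "remember crucial considerations [A] and [B]"

@dataclass
class InsightRepository:
    """Stores key insights instead of large pre-built mathematical models."""
    insights: list[Insight] = field(default_factory=list)

    def save(self, topic: str, note: str) -> None:
        self.insights.append(Insight(topic, note))

    def retrieve(self, question: str) -> list[str]:
        # Naive keyword match; a real system would likely use semantic retrieval.
        q = question.lower()
        return [i.note for i in self.insights if i.topic.lower() in q]

def forecast_on_demand(repo: InsightRepository, question: str) -> str:
    """The 'last mile': combine stored insights with question-specific work."""
    notes = repo.retrieve(question)
    context = "; ".join(notes) if notes else "no stored insights found"
    # In a real system this is where on-demand modeling would happen.
    return f"Forecast for {question!r} drawing on: {context}"
```

The design choice this illustrates is that the expensive artifact being cached is the *note*, not a full model — the model itself is cheap enough to regenerate at query time.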