I kind of hate to say this, but in the last year I’ve become much less enamored of this broad idea. Due to advances in LLMs, my guess now is that:
1. People will ask LLMs for ideas/forecasts at the point that they need them, and the LLMs will do much of the work right then.
2. In terms of storing information and insights about the world, Scorable Functions are probably not the best format (though it’s not clear what is).
3. Ideally, we could basically treat the LLM itself as the “Scorable Function”. As in, we have a rating for how good a full LLM is. This rating becomes more important than that of any individual Scorable Function.
That said, Scorable Functions could still be a decent form of LLM output here and there. An obvious step would be to train LLMs to be great at outputting Scorable Functions.
More info here:
https://forum.effectivealtruism.org/posts/mopsmd3JELJRyTTty/ozzie-gooen-s-shortform?commentId=vxiAAoHhmQqe2Afc9