I would say we are basically on the same page in terms of the overall vision. I'm also trying to get at these logical chains of information that we can travel backwards through, both to easily sanity-check and to do data analysis.
Where I think we diverge: if there is no underlying structure to these logical chains beyond a bunch of arrows pointing between links, it reduces our ability to automate and to extract insights.
A few examples:
You link to an EA Forum post with multiple claims. To build logical chains, we now need a database that stores each claim in each post. To get that, we either have to convince everyone to use a standard format for claims or rely on an LLM to parse them.
You link multiple sources, which themselves link multiple sources. Since linking is just drawing arrows in an abstract sense, I have no way to discern how much each source contributed to the guess. I assume we would fall back on a uniform distribution to model each source's contribution to the final guess, but this is clearly badly off in many cases, so we lose a lot of information.
If we instead link to models, we retain much more information down the chain.
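To make the uniform-vs-modeled contrast concrete, here is a minimal sketch. Everything in it (function names, the example sources, the weights) is a hypothetical illustration, not part of any real system we've discussed:

```python
# Hypothetical sketch: two ways of attributing a final guess to its sources.
# All names and numbers below are illustrative assumptions.

def uniform_attribution(sources):
    """Bare links only record *that* sources were used,
    so the best reconstruction is an even split of credit."""
    n = len(sources)
    return {s: 1 / n for s in sources}

def modeled_attribution(weights):
    """If each link carries a model (here, just a weight),
    the chain preserves how much each source actually mattered."""
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

sources = ["forum_post", "dataset", "expert_interview"]

# Uniform: every source gets 1/3, regardless of its real influence.
print(uniform_attribution(sources))

# Modeled: a chain where one source dominated the guess.
print(modeled_attribution({"forum_post": 1, "dataset": 8, "expert_interview": 1}))
```

The point of the sketch: with bare arrows, traversing the chain backwards can only ever recover the uniform version, so any dominance structure among the sources is lost.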
Overall, I wouldn't say my proposition is a full substitute for your idea, but I think there is overlapping functionality.