I see this as a fundamentally different project from Wikipedia. Wikipedia deliberately excludes primary sources / original research and “non-notable” things, while I am proposing, just as deliberately, to include those things. Wikipedia requires a “neutral point of view”, which, I think, is always in danger of describing a linguistic style rather than “real” neutrality (whatever that means). Wikipedia produces a final text that (when it works well) represents a mainstream consensus view of truth, whereas I am proposing to allow various proposals about what is true (including all sorts of false claims) and let them compete in a transparent way, so that people can see exactly why allegedly false views are being rejected. In short: Wikipedia avoids going into the weeds; I propose steering straight into them. In addition, I think that having a different social structure is valuable: Wikipedia has human editors trying to delete the crap, while my site would have algorithms, fed with crowdsourced judgement, trying to move the best stuff to the top (the crap left intact). Different people will be attracted to the two sites (which is good, because people like me who do not contribute to Wikipedia are an untapped resource).
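To make the “rank, don’t delete” idea concrete, here is a minimal sketch. The data model (contributions carrying a list of crowdsourced ratings) and the scoring rule are my assumptions, not a committed design; the point is only that low-rated material sinks rather than disappearing.

```python
# Hypothetical data model: rank contributions by a Bayesian average of
# crowdsourced ratings instead of deleting low-rated ones.
from dataclasses import dataclass, field

@dataclass
class Contribution:
    text: str
    ratings: list[float] = field(default_factory=list)  # e.g. 0-5 from users

def bayesian_score(c: Contribution, prior_mean: float = 2.5, prior_weight: float = 5.0) -> float:
    """Shrink toward a prior so a handful of lucky ratings can't dominate."""
    n = len(c.ratings)
    return (prior_mean * prior_weight + sum(c.ratings)) / (prior_weight + n)

def rank(contributions: list[Contribution]) -> list[Contribution]:
    # Nothing is removed: low-quality items simply end up at the bottom.
    return sorted(contributions, key=bayesian_score, reverse=True)
```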
I would also point out that rigor is not required or expected of users; instead, I see the challenge as one of creating an incentive structure that (i) rewards a combination of rigor and clarity, and (ii) when rigor is not possible due to data limitations, rewards reasonable and clear interpretations of whatever data is available.
I agree that it won’t be profitable quickly, if ever. I expect its value to be based on network effects, like Facebook or Twitter (sites that did succeed despite starting without a network, mind you). But I wonder if modern AI technology could enable a new way to start the site without the network, e.g. by seeding it with numerous concurrent users that are all GPT-4 instances, plus other GPT-4 instances rating the quality of earlier output, thus enabling a rapid iteration process in which we observe the system’s epistemic failure modes and fix them. It seems like cheating, but if that’s the only way to succeed, so be it. (AIs might produce too much hallucinated evidence that ultimately needs to be deleted, though, which would mean AIs didn’t really solve the “network effect” problem; but they could still be helpful for testing purposes and for other specific tasks like automatic summarization.)
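A rough sketch of that bootstrap loop, under my own assumptions: simulated contributors draft claims, simulated raters score them, and we review the ranked output for failure modes before any real users arrive. `call_gpt4` is a hypothetical placeholder for whichever chat-completion API is actually used.

```python
# Sketch of AI-bootstrapping: GPT-4 plays both contributor and rater so the
# site's mechanics can be exercised (and debugged) without a real user base.

def call_gpt4(prompt: str) -> str:
    # Placeholder: wrap your LLM provider's chat API here.
    raise NotImplementedError

def bootstrap_round(topic: str, n_contributors: int = 5, n_raters: int = 3) -> list[tuple[str, float]]:
    drafts = [call_gpt4(f"Write a sourced claim about: {topic}") for _ in range(n_contributors)]
    scored = []
    for draft in drafts:
        ratings = []
        for _ in range(n_raters):
            reply = call_gpt4(f"Rate the rigor and clarity of this claim from 0 to 10:\n{draft}")
            try:
                ratings.append(float(reply.strip().split()[0]))
            except ValueError:
                ratings.append(0.0)  # unparseable rating: itself a failure mode worth logging
        scored.append((draft, sum(ratings) / len(ratings)))
    # Sort best-first; manually reviewing the top and bottom items is how we'd
    # spot epistemic failure modes such as hallucinated citations.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```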