Kinda blows my mind that this doesn’t have more comments. I’m not really in a great position to help, but I have a lot of time I could dedicate to something like this, and would be willing to do so if you could point me in the right direction.
Well, it’s new. There are some comments on LW. Currently I’m not ready to put much time into this, but what are your areas of expertise?
I am familiar with a few different areas, but I don’t think I have a lot of expertise (which is why I said I’m not in a great position to help).
You were right, this is one of the least popular ideas around. Perhaps even EAs think the truth is easy to find, that falsehoods aren’t very harmful, or that automation can’t help? I’m confused too. LW liked it a bit more, but not much.
Thanks for writing this! I agree truth seeking is important.
My low-confidence intuition from a quick scan of this is that something like your project would probably provide a lot of value if completed. However, it seems that it would be very hard to do, probably not profitable for several years, and never particularly profitable. That’s going to make it hard to fund/do. With that in mind, I’d probably look into taking the ideas and motivation and putting them into supporting something else.
For example, have you considered joining the team at Metaculus or Manifold Markets?
I wouldn’t read into the lack of response too much. In recent times, most people who read the EA Forum seem to be looking for quick and digestible insights from sources they know and trust. This is true of me also. The consequence is that posts like yours get overlooked as readers dip in and out.
I’m sure working for Metaculus or Manifold or OWID would be great.
I was hoping to get some help thinking of something smaller in scope and/or profitable that could eventually grow into this bigger vision. A few years from now, I might be able to afford to self-fund it by working for free (worth >$100,000 annually), but it’ll be tough with a family of four, and I’ve lost the enthusiasm I once had for building things alone with no support (it hasn’t worked out well before). Plus there’s an opportunity cost in terms of my various other ideas. Somehow I have to figure out how to get someone else interested...
Maybe the right approach is not “developing” / “creating” this, but shifting the systems that are already partway there. You might have a bigger impact if you were working with Wikipedia to shift it more towards the kind of system you would like, for example.
I really doubt that something like this would be profitable quickly, on the grounds that its utility would be derived from its rigour and… Well, people take a while to notice the utility of rigour.
I see this as a fundamentally different project from Wikipedia. Wikipedia deliberately excludes primary sources / original research and “non-notable” things, while I am proposing, just as deliberately, to include those things. Wikipedia requires a “neutral point of view” which, I think, is always in danger of describing a linguistic style rather than “real” neutrality (whatever that means). Wikipedia produces a final text that (when it works well) represents a mainstream consensus view of truth; I am proposing instead to allow various proposals about what is true (including all sorts of false claims) and let them compete in a transparent way, so that people can see exactly why allegedly false views are being rejected. In short: Wikipedia avoids going into the weeds; I propose steering straight into them.

In addition, I think having a different social structure is valuable: Wikipedia has its human editors trying to delete the crap, while my site would have algorithms, fed with crowdsourced judgement, trying to move the best stuff to the top (crap left intact). Different people will be attracted to the two sites, which is good, because people like me who do not contribute to Wikipedia are an untapped resource.
I would also point out that rigor is not required or expected of users. Instead, I see the challenge as one of creating an incentive structure that (i) rewards a combination of rigor and clarity, and (ii) when rigor is not possible due to data limitations, rewards reasonable and clear interpretations of whatever data is available.
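To make that incentive structure concrete, here is a minimal sketch (every name, weight, and scale below is my own hypothetical illustration, not a settled design) of how crowd ratings for rigor and clarity might be combined into a ranking score, with rigor down-weighted when reviewers flag that the available data doesn’t permit it:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Contribution:
    """A user-submitted claim plus crowd ratings (all fields hypothetical)."""
    text: str
    rigor_ratings: list[float] = field(default_factory=list)    # each in 0..1
    clarity_ratings: list[float] = field(default_factory=list)  # each in 0..1
    data_limited: bool = False  # reviewers flagged "rigor not possible here"

def score(c: Contribution, rigor_weight: float = 0.6) -> float:
    """Combine rigor and clarity into one ranking score.

    When reviewers flag data limitations, rigor counts for less, so that
    'reasonable and clear interpretation of sparse data' isn't buried
    automatically beneath well-documented but trivial claims.
    """
    rigor = mean(c.rigor_ratings) if c.rigor_ratings else 0.5      # neutral prior
    clarity = mean(c.clarity_ratings) if c.clarity_ratings else 0.5
    w = rigor_weight * (0.5 if c.data_limited else 1.0)
    return w * rigor + (1.0 - w) * clarity

def rank(contributions: list[Contribution]) -> list[Contribution]:
    # Crap is left intact but sorted to the bottom rather than deleted.
    return sorted(contributions, key=score, reverse=True)
```

Keeping deletion out of the loop is deliberate: low-quality contributions sink rather than disappear, which preserves the transparency described above.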
I agree that it won’t be profitable quickly, if ever. I expect its value to be based on network effects, like Facebook or Twitter (sites that did succeed despite starting without a network, mind you). But I wonder whether modern AI technology could enable a new way to start the site without the network, e.g. by populating it with numerous concurrent users that are all GPT-4 instances, plus other GPT-4 instances rating the quality of earlier output, enabling a rapid iteration process in which we observe the system’s epistemic failure modes and fix them. It seems like cheating, but if that’s the only way to succeed, so be it. (AIs might produce so much hallucinated evidence that it ultimately needs to be deleted, though, which would mean AIs didn’t really solve the “network effect” problem; still, they could be helpful for testing purposes and for other specific tasks like automatic summarization.)
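As a rough illustration, the generate-then-rate loop could be prototyped against the OpenAI API in a few dozen lines; the prompts, rating scale, and example topic below are placeholder assumptions of mine, not a worked-out design:

```python
# Hypothetical sketch of the "seed the site with AI users" idea: some
# GPT-4 instances post claims about a topic, others rate them, and we
# inspect the results for epistemic failure modes. Requires
# `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def generate_claims(topic: str, n_users: int = 3) -> list[str]:
    # Each "user" is a separate GPT-4 call posing as a site contributor.
    return [ask("You are a contributor on an evidence-sharing site. "
                "State one claim about the topic and cite your evidence.",
                f"Topic: {topic}")
            for _ in range(n_users)]

def rate_claim(claim: str) -> str:
    # A second population of GPT-4 "reviewers" rates the earlier output.
    return ask("You are a reviewer. Rate the claim's rigor and clarity "
               "from 0-10 each, and flag any evidence that looks fabricated.",
               claim)

if __name__ == "__main__":
    for claim in generate_claims("Does remote work reduce productivity?"):
        print(claim, "\n->", rate_claim(claim), "\n")
```

Even if the generated claims turn out to be full of hallucinated evidence, cheap loops like this would surface the system’s failure modes before any real users arrive.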