Yeah, I definitely felt that one of the downsides of Semantica Pro (or at least, my version of it) was the lack of quantitative or even logical (if-then) functionality, which in my mind would be a crucial feature. For example, I'd want some kind of logical system that flags claims that depend on an assumption/claim/study shown to be flawed (so that the flagged claims can be reevaluated). In my most recent research project, for instance, I found a study that used a (seemingly/arguably) flawed experimental design for testing prediction market incentive structures, produced a finding that seemed counterintuitive (at least before taking the design flaws into account), and then went on to be cited by roughly 50-100 other studies, some of which even cited it as the basis for their own experimental design.
(Venting aside) I’m definitely interested in exploring the idea further.
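To make the idea a bit more concrete, here's a rough Python sketch of the kind of flag-propagation I have in mind (all names are hypothetical, and this has nothing to do with how Semantica Pro actually works internally): claims form a dependency graph, and marking one claim as flawed flags everything downstream of it.

```python
from collections import defaultdict, deque

# Hypothetical sketch: claims form a dependency graph, and marking one
# claim as flawed flags everything that (transitively) depends on it.

class ClaimGraph:
    def __init__(self):
        # maps a claim id -> set of claim ids that depend on it
        self.dependents = defaultdict(set)

    def add_dependency(self, claim, depends_on):
        """Record that `claim` relies on `depends_on`."""
        self.dependents[depends_on].add(claim)

    def flag_flawed(self, flawed_claim):
        """Return every claim that needs reevaluation because it
        (directly or indirectly) depends on `flawed_claim`."""
        flagged, queue = set(), deque([flawed_claim])
        while queue:
            current = queue.popleft()
            for dependent in self.dependents[current]:
                if dependent not in flagged:
                    flagged.add(dependent)
                    queue.append(dependent)
        return flagged

# Toy example: one study's finding underpins two later designs,
# and one of those underpins a meta-analysis.
g = ClaimGraph()
g.add_dependency("follow-up-study-A", "original-incentive-study")
g.add_dependency("follow-up-study-B", "original-incentive-study")
g.add_dependency("meta-analysis", "follow-up-study-A")

print(g.flag_flawed("original-incentive-study"))
# -> {'follow-up-study-A', 'follow-up-study-B', 'meta-analysis'}
```

Obviously a real system would need provenance, degrees of dependence, and human judgment on what "flawed" means, but even this simple transitive flagging would have surfaced that citation chain in my example.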