I agree the complexity-level question is a tough one, although my impression has been that the system could be implemented at varying levels of complexity (e.g., focusing only on simpler, more objective characteristics like “data source used” or “experimental methodology” vs. also including theoretical arguments and assumptions). I think the primary users would tend to be researchers, who could then translate the findings into more familiar terms or representations for policymakers, especially if the system does not become popular or widespread enough for some policymakers to know how to use or interpret it (somewhat as with regression tables and similar analyses). That said, I also find it plausible that some policymakers would have enough basic understanding of the system to engage and explore on their own, much as some policymakers can directly evaluate regression findings.
Ultimately, two of the primary use cases I envision are:
1. Identifying the ripple effects of changes in assumptions, beliefs, datasets, etc. Suppose, for example, that an experimental finding or dataset that influenced dozens of studies is shown to be flawed: it would be helpful to have an initial outline of which claims and assumptions need to be reevaluated in light of the new finding.
2. Mapping the debate for a somewhat contentious subject (or anything where the literature is not in agreement), including by identifying whether any claims have been left unsupported or unchallenged (a rough code sketch of both cases follows below).
It seems that such insights might be helpful for a researcher trying to decide what to focus on (and/or a grantmaker trying to decide what research to fund).
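To make these two use cases concrete, here is a minimal sketch, assuming the underlying representation is a directed graph of claims, datasets, etc., with edges labeled “supports” or “challenges”. The node names, edge labels, and use of networkx are all my own illustrative assumptions, not a description of any existing tool:

```python
# Minimal sketch of use cases (1) and (2), assuming claims and datasets
# are nodes in a directed graph whose edges carry a hypothetical
# "supports"/"challenges" label. The graph contents are illustrative.
import networkx as nx

g = nx.DiGraph()
# An edge (a, b) means "a supports/challenges b".
g.add_edge("dataset_X", "claim_A", relation="supports")
g.add_edge("claim_A", "claim_B", relation="supports")
g.add_edge("claim_A", "claim_C", relation="supports")
g.add_edge("claim_D", "claim_C", relation="challenges")
g.add_edge("claim_E", "claim_B", relation="supports")

# Use case (1): if dataset_X turns out to be flawed, everything reachable
# from it in the graph is a candidate for reevaluation.
print("Reevaluate:", sorted(nx.descendants(g, "dataset_X")))
# -> Reevaluate: ['claim_A', 'claim_B', 'claim_C']

# Use case (2): flag claims with no incoming "supports" edge (unsupported)
# or no incoming "challenges" edge (unchallenged).
def incoming(node, rel):
    return [u for u, _, d in g.in_edges(node, data=True) if d["relation"] == rel]

for node in sorted(g.nodes):
    if not node.startswith("claim_"):
        continue  # skip root artifacts like datasets
    if not incoming(node, "supports"):
        print(node, "is unsupported")
    if not incoming(node, "challenges"):
        print(node, "is unchallenged")
```

Even a toy version like this suggests how a flawed source could be traced forward through the literature, and how gaps in a debate (claims nobody has supported or contested) could be surfaced mechanically.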
Cool! My immediate thought is that it would be interesting to see a case study of (1) and/or (2). Do you know of this being done for any specific case?
Perhaps we could schedule a call to talk further—I’ll send you a DM!