Nice. I think we could model this to see how ease/cost of evaluation interacts with other terms when assessing overall choice-worthiness. In your example the intuition sails through because A is only marginally cheaper to implement, while B is much cheaper to evaluate. I’d like to figure out precisely when lower evaluative costs outweigh lower implementation costs, and what that depends on.
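A minimal sketch of the kind of model I have in mind, with hypothetical numbers and the simplifying assumption that evaluation is a perfectly informative pilot run before full implementation:

```python
def ev_blind(p, value, c_impl):
    """Expected value of implementing without evaluating:
    pay the implementation cost up front, succeed with probability p."""
    return p * value - c_impl

def ev_with_pilot(p, value, c_impl, c_eval):
    """Expected value with a (perfectly informative) pilot:
    pay the evaluation cost, then implement only if the pilot succeeds."""
    return -c_eval + p * (value - c_impl)

# Toy numbers (assumptions, not from the post):
# Option A: slightly cheaper to implement, expensive to evaluate.
# Option B: slightly pricier to implement, much cheaper to evaluate.
p, value = 0.5, 40
a = max(ev_blind(p, value, c_impl=10), ev_with_pilot(p, value, c_impl=10, c_eval=8))
b = max(ev_blind(p, value, c_impl=12), ev_with_pilot(p, value, c_impl=12, c_eval=1))
print(a, b)  # A: 10.0, B: 13.0 — B's cheap evaluation wins despite costlier implementation
```

Under these assumptions the threshold falls out cleanly: evaluating beats implementing blind exactly when c_eval < (1 - p) * c_impl, i.e. when the evaluation costs less than the implementation spend you'd expect to waste on a failed project.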
Your post also reads like an argument for preferring good feedback loops when evaluating projects, which some orgs value highly.
Yep, agree that this is similar to feedback loops, but I feel like people talking about feedback loops focus mostly on the timescale of evaluation, rather than on timescale, quality, and cost together.
It would be interesting to see work on how precisely we should make trade-offs between expected value, quality of evidence, and potential for ongoing evaluation.
I think it might make sense to divide quality of evidence into quality of existing evidence and potential for ongoing evaluation.