Thanks for engaging! I’ll speak for myself here, though others might chime in or have different thoughts.
How do you determine if you’re asking the right questions?
Generally we ask our clients at the start something along the lines of “what question is this report trying to help answer for you?”. Often this is fairly straightforward, like “is this worth funding?” or “is this worth more researcher hours to explore?”. We will often push back or add things to the brief to make sure we include what is most decision-relevant within the timeframe we are allocated. For example, we were once asked to look into the landscape of philanthropic spending for cause area X, but it turned out that non-philanthropic spending might also be pretty decision-relevant, so we suggested incorporating that into the report.
We have multiple check-ins with our client to make sure the information we’re getting is the kind of information they want, and to give us opportunities to pivot if what we find raises new questions that might be more decision-relevant.
What is your process for judging information quality?
I don’t think we have a formalised organisational-level process around this; it’s just fairly general research appraisal that we each do independently. There’s a tradeoff between following a thorough process and speed: it might be clear on skimming that a study should update us much less because of its recruitment or allocation methods, etc., but if we needed to, say, run the MMAT (Mixed Methods Appraisal Tool) on every study we read, that would be pretty time-consuming. In general we try to transparently communicate what we’ve done in check-ins with each other, with our client, and in our reports, so they’re aware of the limitations of the search and of our conclusions.
Do you employ any audits or tools to identify/correct biases (e.g. what studies you select, whom you decide to interview, etc.)?
Can you give me an example of a tool to identify biases in the above? I assume you aren’t referring to tools we can use to appraise individual studies/reviews, but something one level above that?
RE: interviews, one approach we frequently take is to look for the key papers or reports in the field that are most likely to be decision-relevant and reach out to their authors. Sometimes we will intentionally seek out views that pull us toward opposing sides of the potential decision. Other times we just need technical expertise in an area our team doesn’t have. Generally we will share the list with the client to make sure they’re happy with the choices we’ve made; this is mainly intended to avoid doubling up on the same expert, but it also serves as a checkpoint, I guess.
We don’t have audits, but we do have internal reviews. Admittedly, I think our current process is unlikely to pick up issues around interviewee selection unless the reviewer is well connected in this space, and it will similarly only pick up issues in study selection if the reviewer knows specific papers or has strong priors about the existence of stronger evidence on the topic. My guess is that the likelihood of an audit leading to meaningful changes in our report is sufficiently low that, if it took more than a few days, it just wouldn’t be worth the time for most of the reports we do. That being said, it might be a reasonable thing to consider as part of a separate retrospective review of previous reports! Do you have any suggestions here, or are there good approaches you know about or have seen?
Thanks for your explanations!
Re: Questions
Apologies…I meant the questions your team decides on during your research and interview processes (not the initial prompt/project question). As generalists, do you ever work with domain experts to help frame the questions (not just get answers)?
Re: Audit tools
I realize that “tools” might have sounded like software or something, but I’m thinking more of frameworks that can help weed out potential biases in data sets (e.g. algorithmic bias, the clustering illusion), studies (e.g. publication bias, parachute science), and individuals (e.g. cognitive biases, appeals to authority). I’m not suggesting you encounter these specific biases in your research, but I imagine there are known (and unknown) biases you have to check for and assess.
Re: A possible approach for reducing bias
Again, I’m not a professional researcher, so I don’t want to assume I have anything novel to add here. That said, when I read about research and/or macro analysis, I see a lot of emphasis on things like selection and study design, but not as much on the curation or review teams, i.e. who decides?
My intuition tells me that, along with study design, curation and review are particularly important for weeding out bias. (The merry-go-round water pump story in Doing Good Better comes to mind.) You mentioned sometimes interviewing differing or opposing views, but I imagine these interviews happen within the research itself and are usually with other academics or recognized domain experts (please correct me if I’m wrong).
So, in the case of, say, a project by an org from the Global North that would lead to action/policy/capital allocation in or for the Global South, it would seem that local experts should also have a “seat at the table”, not just in providing data, but in curating, reviewing, and concluding as well.