Thank you for doing this work! I really admire the rigor of this process. I’m really curious to hear how this work is received by (1) other evaluation orgs and (2) mental health experts. Have you received any such feedback so far? Has it been easy to explain? Have you had to defend any particular aspect of it in conversations with outsiders?
I do have one piece of feedback. You have included a data visualization here that, if you’ll forgive me for saying so, is trying to tell a story without seeming to care about the listener. There is simply too much going on in the viz for it to be useful.
I think a visualization can be extremely useful here in communicating various aspects of your process and its results, but cramming all of this information into a single pane makes the chart essentially unreadable; there are too many axes that the viewer needs to understand simultaneously.
I’m not sure exactly what you wanted to highlight in the visualization, but if you want to demonstrate the simple correlation between mechanical and intuitive estimates, a simple scatterplot will do, without the extra colors and shapes. On the other hand, if that extra information is substantive, it should really be in separate panes for the sake of comprehensibility. Here’s a quick example with your data (direct link to a larger version here):
I don’t think this is the best possible version of this chart (I’d guess it’s too wide, and opinions differ on whether all axes should start at 0), but it’s an example of how you might tell multiple stories in a slightly more readable way. The linear trend is visible in each plot, it’s easier to make out the screening sizes, and I’ve outlined the axes delineating the four quadrants of each pane to highlight that it was mostly programmes scoring highly on both measures that made it into Round 2.
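If it’s useful, a small-multiples chart like this only takes a few lines to produce. Here’s a minimal sketch in Python with matplotlib; the file name, column names (`mechanical`, `intuitive`, `screening_size`, `in_round_2`), and median cut-offs are all illustrative stand-ins, not taken from your actual data:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical input: one row per programme, with both scores,
# the screening size, and whether it reached Round 2.
df = pd.read_csv("screening_scores.csv")

groups = list(df.groupby("screening_size"))
fig, axes = plt.subplots(1, len(groups), figsize=(4 * len(groups), 4),
                         sharex=True, sharey=True, squeeze=False)

for ax, (size, group) in zip(axes[0], groups):
    # Colour only by Round 2 inclusion; everything else is separated by pane.
    colours = group["in_round_2"].map({True: "tab:blue", False: "tab:gray"})
    ax.scatter(group["mechanical"], group["intuitive"], c=colours, s=25)
    # Guide lines marking the four quadrants (cut-offs here are just medians).
    ax.axvline(df["mechanical"].median(), color="gray", linewidth=0.5)
    ax.axhline(df["intuitive"].median(), color="gray", linewidth=0.5)
    ax.set_title(f"Screening size: {size}")
    ax.set_xlabel("Mechanical estimate")

axes[0][0].set_ylabel("Intuitive estimate")
fig.tight_layout()
plt.show()
```

The point of faceting by screening size rather than encoding it with shape or colour is that each pane then carries only two axes plus one binary colour, which is about as much as a reader can decode at a glance.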
Feel free to take this with as much salt as necessary. I’m working from my own experience, which is that communicating data has tended to take just as much work on the communication as it does on the data.
Hi Matt. Thanks for your concrete suggestions on the data visualisation. I think we made the mistake of adding more and more information without rethinking what exactly we’re trying to show.
On how the work is being received by other evaluation orgs: I’m not too sure. I suspect other orgs will be more interested in how we do the final evaluation, rather than the preliminary filtering. Hopefully we’ll also get more feedback this weekend at EAGxVirtual (Jasper is giving a talk).
And from mental health experts: My impression from speaking to several academics is that there’s a real effort in global mental health (GMH) at the moment to show that cost-effective interventions exist (this being important to policy-makers); see e.g. Levin & Chisholm (2016) and the WHO draft menu of cost-effective interventions. We have also had quite a few senior researchers offer their support or advice. We hope that our work on the cost-effectiveness of micro-interventions will be useful as part of this wider context. One person we spoke to said that a systematic review, perhaps done in collaboration with a university, would be taken more seriously by academics than our current plan. This seems very likely to be true, with the obvious downside that it would be a lot more work.