Hey Animal Ask team and Amy, thanks a lot for the blog, really insightful!
I thought of a super time-consuming way of evaluating impact: have the partner team do their own research, then do the same research yourselves and compare the results. If you try this on a relatively small project, you could see the objective difference in outcomes. Of course, you'd still have counterfactuals to deal with, since you'll eventually go with only a couple of the intervention choices.
Also, in my own consultancy work I've found that it can sometimes take years to track results, so I guess that will be the case for you too. Looking forward to reading your post about progress in a year's time!
Thanks so much for your important work for the animals and I hope to collaborate on something soon!
Hey Sofia, great idea. Groups have usually indicated that, without our involvement, they would spend <10% of the time we spend researching, so this is a more viable idea than one might expect. There are some reasons it may not entirely cross-apply to the rest of our work, such as the concern that groups would anchor too much to their shallower research, which usually produces more optimistic assessments (Optimizer's Curse), or a possible selection effect whereby the groups willing to do this are also more likely to make better decisions anyway. We are tracking the asks that other similar organisations are using in the regions or areas we have worked in, too. This gives us some sense of the counterfactual, but a more direct experiment of this kind could be valuable, particularly if we ran it with a few groups using different advocacy methods. We will look into the idea, as well as some of the other ways we could amend our pre/post surveys, before we partner with the next group!