Thanks Aaron for this really insightful and thoughtful post. The scale of ambition here is truly promising!
I have a question about the confidence in the effectiveness of the humane slaughter implementations: For a given stunner installation, how confident are you that at least X% of those shrimps are actually being humanely slaughtered?
I’m thinking about this in terms of the curves shown in the attached graph:
- Curve A: Lower confidence
- Curve B: Medium confidence
- Curve C: Higher confidence
- Less confident than A
- More confident than C
- Something else, mix of options, etc.
Which curve best represents your current confidence in the effectiveness? I imagine this relates closely to your focus on “increasing confidence” through better implementation, monitoring, and measurement & evaluation that you mention in the post.
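For concreteness, one way to read these curves is as complementary CDFs over per-installation effectiveness: each curve gives P(effectiveness ≥ X%) as a function of the threshold X. The sketch below is purely illustrative, using made-up Beta distributions to mimic a lower-, medium-, and higher-confidence curve; the parameters are assumptions, not data from the post.

```python
# Illustrative only: hypothetical Beta beliefs over per-installation stun
# effectiveness. The parameters are invented to mimic curves A/B/C.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 201)  # threshold X, as a fraction humanely slaughtered

# Hypothetical beliefs, expressed as Beta distributions over effectiveness.
curves = {
    "A (lower confidence)": stats.beta(2, 2),    # wide, weakly informed
    "B (medium confidence)": stats.beta(6, 3),   # leans toward high effectiveness
    "C (higher confidence)": stats.beta(18, 3),  # concentrated near 100%
}

for label, dist in curves.items():
    # Survival function = P(effectiveness >= X), i.e. the confidence that
    # at least X% of shrimps are humanely slaughtered.
    plt.plot(100 * x, dist.sf(x), label=label)

plt.xlabel("X (% of shrimps humanely slaughtered)")
plt.ylabel("P(effectiveness ≥ X%)")
plt.legend()
plt.show()
```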
Thanks for the kind words, Johannes!
That’s a great question, and you’re exactly right that our “increasing confidence” is focused on answering questions like that.
One of the reasons we started the Humane Slaughter Initiative was to deploy stunners in different regions and contexts in order to remove barriers to uptake. The industry was telling us that humane slaughter wasn’t possible in this or that context for one reason or another. We thought it made sense to try it out and understand the barriers in each context better.
We’re still very much in this learning phase, and due to the variety of contexts we’ve deployed stunners in, there isn’t really a “given stunner”—effectiveness varies significantly by context, equipment type, species, and operational practices. Additionally, we’re exploring New Solutions & Protocols, which further complicates providing a single answer.
What I can say is that:
- We’ve seen successful implementation in multiple contexts, but with notable variation
- Our monitoring suggests that proper training and ongoing support are critical factors
- This variation is exactly why we’re prioritising better M&E systems and implementation support
I’m hesitant to give a specific confidence curve right now because (1) it would likely be context-dependent rather than universal, and (2) improving this is an active focus area for us, so any number I give today could anchor people’s thinking even as we make progress.
It’s a goal of ours to publish more research and data as we collect it over the next 12 months. This will help donors and industry partners better understand effectiveness across different contexts. So, stay tuned for those developments in the coming year :)
Thanks for this detailed answer, this makes a lot of sense. Looking forward to the progress updates to come. All the best for your projects!