Hey, thanks for doing this! I think you did a good job of considering most of the uncertainties. My main disagreement is with labelling this a moderate limitation:
I think that it is a major limitation. In general, since it seems that most of the work is done by volunteers, the situation reminds me of an example I gave in this article:
Imagine many volunteers collaborating to do a lot of good, and having a small budget for snacks. Their cost-effectiveness estimate could be very high, but it would be a mistake to expect their impact to double if we double their funding for snacks.
You could imagine that program being run without any paid staff or any expenses, and having infinite cost-effectiveness. But it wouldn't follow that this is a good opportunity for donors. If volunteer involvement is the main driver of the cost-effectiveness, I don't see a reason to think that the cost-effectiveness of related activities, like expanding advertising or accelerating the development of an app, would be at all similar to the cost-effectiveness of the program so far. These seem to be totally different activities.
That said, the cost-effectiveness estimate does suggest that expanding the program into more geographic locations, cultures and languages could be promising. But if that is what you use the cost-effectiveness estimate for, maybe you shouldn't adjust the estimates of future costs towards the lower present-day costs, because in that case the set-up costs are relevant. Also, in that case I wouldn't feature the cost-effectiveness figures so prominently in this analysis if the target audience is Israelis wanting to donate to local charities.
Another thing: if participants switched to a vegetarian diet and started eating more eggs to get enough protein instead of eating beef or lamb, the program might have caused more suffering than it prevented (see http://ethical.diet/). I imagine they were encouraged to get their protein in other ways, but it is still something to consider.
Thank you for your great feedback and suggestions! (and sorry for not responding sooner)
I guess what counts as a “major” or “moderate” limitation is, in the end, contingent on one’s aspirations. If we had the standards of an organization like GiveWell, this would most certainly be a very big limitation. But quite early on we understood that we did not have the data to support conclusions about cost-effectiveness as strong as GiveWell’s recommendations. Rather, our approach was: let’s do the best we can with the data we have at hand, and simply make sure that we are very clear and transparent about the limitations of our analysis. The biggest limitation of this analysis is the lack of experimental data (with only observational data available), so we wanted to make sure it got the most eye-catching label.

In the end, we believe that what’s important is that readers of the report (or just of the executive summary) get a good sense of which conclusions are justified given our analysis and which aren’t, and that they understand what the important limitations of the analysis are. We totally agree with your arguments, and with the point that past cost-effectiveness is by no means proof of future cost-effectiveness given more funding (though we do think there are reasons for cautious optimism in the case of Animals Now).
Also, thank you for the interesting suggestion for an RCT study design. This is something we have been considering in general, but we hadn’t thought of your exact idea. However, to attempt anything like that, we would first need the charity to have a strong motivation to get into that adventure.
Some more thoughts:
If someone were to look into the program more deeply, maybe it would be possible to run an RCT. You could randomly assign some participants who sign up for the program to a control group. You could just send these people a document explaining how to be vegetarian, without having them join any groups, and then send them the same questionnaire as everyone else. This has some flaws, but it would be better than nothing. That said, it’s pretty clear that the program should continue, so I don’t think it’s worth the effort.
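The randomization step in a design like that could be very simple. Here is a minimal sketch; the participant names, group sizes, and seed are all made up for illustration:

```python
import random

# Sketch of the suggested design: hold out a random subset of sign-ups as a
# control group that only receives an explanatory document, while the rest
# join the regular program groups. Group sizes here are arbitrary.
signups = [f"participant_{i}" for i in range(100)]  # hypothetical sign-ups

rng = random.Random(0)  # fixed seed so the split is reproducible
rng.shuffle(signups)

control = signups[:20]    # only receive the "how to be vegetarian" document
treatment = signups[20:]  # join the program as usual

# Later, both groups would get the same follow-up questionnaire,
# and their reported consumption changes would be compared.
print(len(control), len(treatment))
```

The fixed seed matters mainly for auditability: anyone re-running the assignment gets the same split.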
Another mildly useful thing to do would be to check whether meat consumption in Israel has gone down in general, and use that as a rough control. This could help rule out the hypothesis that participants consumed less meat for unrelated reasons that apply to all Israelis, such as increased meat prices, increased availability of plant-based options, or a meat-related health scare.

Related: https://veganuary.com/what-impact-does-veganuary-have/
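As a toy illustration of that adjustment, every number below is invented, not from the report:

```python
# Hypothetical per-participant meat consumption, in portions per week.
participants_before = 10.0  # assumed average at sign-up
participants_after = 7.0    # assumed average at follow-up

# Assumed nationwide change over the same period, e.g. a 5% decline
# driven by prices, plant-based availability, or a health scare.
national_change = -0.05

# Naive effect: all of the observed reduction.
raw_reduction = participants_before - participants_after

# Trend-adjusted effect: only the reduction beyond what the national
# trend predicts participants would have done anyway.
expected_without_program = participants_before * (1 + national_change)
adjusted_reduction = expected_without_program - participants_after

print(raw_reduction)       # 3.0 portions/week
print(adjusted_reduction)  # about 2.5 portions/week attributable beyond the trend
```

Under these placeholder numbers, roughly a sixth of the naive effect would be explained by the national trend rather than the program.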
Looks promising! I was hoping for a breakdown by type of animal product and animals spared, but it looks like the data for this wasn’t collected.
We could divide their animal product consumption reduction proportionally, in the same way as their local population (or maybe flexitarians/reducetarians specifically) divides its consumption; that is, we assume they reduced their animal product consumption uniformly, in the same proportions as their local populations. This might give us a rough ballpark estimate in terms of animals spared. (This also sets aside issues of supply and demand, international trade, etc.)
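A minimal sketch of that ballpark calculation; the total, the diet shares, and the portions-per-animal figures below are pure placeholders, not data from the report or about Israel:

```python
# Assumed total portions of meat avoided by participants.
total_portions_reduced = 1_000_000

# Assumed share of each animal product in the local (or flexitarian) diet.
shares = {"chicken": 0.60, "beef": 0.20, "fish": 0.15, "lamb": 0.05}

# Assumed number of portions a single animal yields.
portions_per_animal = {"chicken": 10, "beef": 500, "fish": 2, "lamb": 100}

# Split the total reduction by diet share, then convert portions to animals.
animals_spared = {
    species: total_portions_reduced * share / portions_per_animal[species]
    for species, share in shares.items()
}
print(animals_spared)
```

Because small animals yield far fewer portions each, chickens and fish dominate the animals-spared count under these placeholder numbers, which is exactly why the breakdown matters.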
I agree it would be nicer to report actual animals spared, rather than generic “portions of meat”. We thought of using data about the average meat diet in the relevant countries to translate portions of meat into animal lives. But we eventually decided against it, because it would introduce even more assumptions and uncertainties into an analysis that we felt had many uncertainties already. Given the amount of uncertainty we already have (with over an order of magnitude between our lower and upper bounds), we felt that too detailed a breakdown might be inappropriate. In the end we decided to keep it simple and use the metric we had data on, hoping that “1 to 12 portions of meat per 1 ILS” would give readers a rough sense of the potential of this program to spare animal lives.