I wonder, what were the obstacles that prevented you from producing a cost-effectiveness analysis for ProVeg?: “For ProVeg in particular, we believe that our best estimate of their cost effectiveness is too speculative to feature in our review or include as a significant factor in our evaluation of their effectiveness.” Is this accompanied by a promise that they will measure their cost-effectiveness in the future? And why was it possible to do this evaluation for the other standout charities but not for this one? Finally, why do you think a cost-effectiveness analysis is not “a significant factor in our evaluation of their effectiveness”?
Our cost-effectiveness estimates cover the relatively short-term, direct impact of each charity. They are estimates of the average cost-effectiveness of a charity over the last year. If the majority of a charity’s programs (by budget) are indirect and/or long-term in their outcomes, we’ve found that our cost-effectiveness estimates for that charity are too uncertain to be useful. (We would not publish a cost-effectiveness estimate for only some of their programs, so as not to risk that estimate being taken as an estimate of the cost-effectiveness of the charity’s activities as a whole.) This was the case with ProVeg: most of their programs have relatively indirect and/or long-term impact. ProVeg is something of a unique case, however, as their V-labelling program, which makes up a significant proportion of their expenditure, is mostly indirect in impact but is also revenue generating.
Speaking more generally, when making recommendation decisions for donors, we are most interested in marginal cost-effectiveness, i.e., the cost-effectiveness of additional funding to a charity. All of our evaluation criteria are indicators of marginal cost-effectiveness. Our quantitative cost-effectiveness estimates are an important such indicator, but they are neither necessary nor sufficient for estimating marginal cost-effectiveness. If we were to recommend only charities for which we could produce these estimates, we would be biasing ourselves in favor of more measurable short-term outcomes, at the cost of promising long-term or indirect change.
As more research becomes available, we hope to have a better understanding of the long-term and less direct outcomes of different interventions. At that point, we will be able to produce more useful estimates for long-term and indirect change.