> both you and ACE seem to have different goals (calculating historic cost-effectiveness vs marginal impact of future dollars)
ACE states (under Criterion 2) that a charity’s Cost-Effectiveness Score “indicates, on a 1-7 scale, how cost effective we think the charity has been [...] with higher scores indicating higher cost effectiveness.”
Would you mind clarifying what you believe ACE’s goal is, and what you believe my goal is?
The analysis in my review is entirely about calculating historic cost-effectiveness. ACE’s Cost-Effectiveness Scores are also entirely about calculating historic cost-effectiveness.
From this post, it seems like you’re trying to calculate historic cost-effectiveness and rate charities exclusively on that (since you haven’t published an evaluation of an animal charity yet, I could be wrong here, though). My understanding of what ACE is trying to do with its evaluations as a whole is to identify where marginal dollars might be most useful for animal advocacy, and to move money from less effective opportunities to those. Cost-effectiveness might be one component of that, but it is far from the only one (e.g., intervention scalability might matter, as might having a diversity of types of opportunities to appeal to different donors, etc.). It’s pretty easy to imagine scenarios where you wouldn’t want to look only at the cost-effectiveness of individual charities when making recommendations, even if that’s what matters in the end. It’s also easy to imagine scenarios where recommending less effective opportunities leads to better outcomes for animals—maybe installing shrimp stunners is super effective, but only some donors will give to it. Maybe it can only scale to a few million dollars per year, but you influence more money than that. Depending on your circumstances, a lot more than the cost-effectiveness of specific interventions matters for making the most effective recommendations.
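To make that concrete, here’s a toy sketch (all numbers are made up, and the opportunity names are just placeholders, not anyone’s actual estimates): a recommender who influences more money than the single best opportunity can absorb does better by also recommending “less effective” options.

```python
# Toy model with made-up numbers: each opportunity has a cost-effectiveness
# (animals helped per dollar) and a cap on how much funding it can absorb.
opportunities = [
    ("shrimp stunners", 10.0, 3_000_000),      # very effective, limited scale
    ("corporate campaigns", 4.0, 50_000_000),  # less effective, huge scale
]

money_to_move = 20_000_000  # total donations the recommender influences

# Fill the most cost-effective opportunity first, then spill over.
total_impact = 0.0
remaining = money_to_move
for name, per_dollar, cap in sorted(opportunities, key=lambda o: -o[1]):
    allocated = min(remaining, cap)
    remaining -= allocated
    total_impact += allocated * per_dollar
    print(f"{name}: ${allocated:,.0f} -> {allocated * per_dollar:,.0f} animals")

print(f"total: {total_impact:,.0f} animals")
# Recommending only the most cost-effective charity would stop at $3M
# (30M animals); the "less effective" option absorbs the other $17M
# and adds 68M more.
```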
My understanding is also that ACE doesn’t see EAs as its primary audience (but I’m less certain about this). This is a reason I’m excited about your project—it seems nice to have “very EA” evaluations of charities in addition to ACE’s. But I also imagine it would be hard to get charities to participate in your evaluation process if you don’t run the evaluations by them in advance, which could make it hard for you to get the information to do what you’re trying to do, unless you rely on the information ACE collects—which then puts you in the awkward position of making a strong argument against an organization you might need in order to conduct your evaluations.
My understanding is that ACE has tried to do something that’s just cost-effectiveness analysis in the past (they used to give probability distributions for how many animals were helped, for example). But it’s really difficult to do that confidently for animal issues, and that’s part of the reason it’s only a portion of the whole picture (along with the other factors I mention above).
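As a rough illustration of why that’s hard (a hypothetical sketch with invented inputs, not ACE’s actual model or numbers): a distribution-based estimate multiplies several uncertain quantities, and the uncertainty compounds.

```python
# Sketch of a distribution-based cost-effectiveness estimate
# (hypothetical inputs; any real model would differ).
import random

def animals_helped_per_dollar():
    # Each input is uncertain, so we sample it rather than fix it.
    hens_per_commitment = random.lognormvariate(11, 1)    # hens covered by one corporate win
    prob_followthrough = random.uniform(0.3, 0.9)         # chance the commitment is implemented
    cost_per_commitment = random.lognormvariate(10, 0.5)  # dollars of campaigning per win
    return hens_per_commitment * prob_followthrough / cost_per_commitment

samples = sorted(animals_helped_per_dollar() for _ in range(10_000))
print("5th pct: ", round(samples[500], 2))
print("median:  ", round(samples[5_000], 2))
print("95th pct:", round(samples[9_500], 2))
# The 90% interval typically spans more than an order of magnitude,
# which is why confident point estimates and rankings are hard here.
```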
Thank you for your response!

> From this post, it seems like you’re trying to calculate historic cost-effectiveness and rate charities exclusively on that (since you haven’t published an evaluation of an animal charity yet, I could be wrong here, though)
This is not what we are trying to do. We simply critiqued the way that ACE calculated historic cost-effectiveness, and how ACE gave Legal Impact for Chickens a relatively high historic cost-effectiveness rating despite having no historic success.
> My understanding of what ACE is trying to do with its evaluations as a whole is to identify where marginal dollars might be most useful for animal advocacy, and to move money from less effective opportunities to those.
ACE does two separate analyses: one for past cost-effectiveness and one for room for more funding. For example, those two sections in ACE’s review of LIC are:
- Cost Effectiveness: How much has Legal Impact for Chickens achieved through their programs?
- Room For More Funding: How much additional money can Legal Impact for Chickens effectively use in the next two years?
Our review focuses on ACE’s Cost-Effectiveness analysis, not on their Room For More Funding analysis. We may evaluate the Room For More Funding analysis in the future, but it is not what this review focused on. We wanted to keep the review short enough that people could read it without a huge time investment, so we could not assess every single part of ACE’s evaluation process.
It is also less reasonable to hold ACE accountable for their Room For More Funding analysis, since it is inherently more subjective and difficult to do. It is far easier for ACE (or any charity evaluator) to analyze historic cost-effectiveness than to analyze future cost-effectiveness. However, I would like to pose a question to you: given that ACE often gives charities a worse historic cost-effectiveness rating for spending less money to achieve the exact same outcomes (see Problem 1), how confident do you feel in ACE’s ability to analyze future cost-effectiveness?
> My understanding is that ACE has tried to do something that’s just cost-effectiveness analysis in the past (they used to give probability distributions for how many animals were helped, for example).
ACE responded to this thread acknowledging that the problems listed in our review needed to be addressed, and that they changed their methodology (to a cost-effectiveness calculation of simply impact divided by cost) to do so.
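To illustrate that corrected calculation (with made-up numbers, not figures from any actual review): under impact divided by cost, a charity that achieves the same outcome on a smaller budget now scores higher rather than lower.

```python
# Cost-effectiveness as impact divided by cost: two hypothetical charities
# with identical outcomes but different budgets.
charities = {
    "Charity A": {"animals_helped": 1_000_000, "spent": 2_000_000},
    "Charity B": {"animals_helped": 1_000_000, "spent": 500_000},
}

for name, c in charities.items():
    ce = c["animals_helped"] / c["spent"]  # animals helped per dollar
    print(f"{name}: {ce:.2f} animals per dollar")

# Charity A: 0.50 animals per dollar
# Charity B: 2.00 animals per dollar
# B achieved the same outcome on a quarter of the budget, so impact/cost
# correctly rates it 4x as cost-effective; a method that rated B worse
# for spending less would invert this.
```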
> This is not what we are trying to do. We simply critiqued the way that ACE calculated historic cost-effectiveness, and how ACE gave Legal Impact for Chickens a relatively high historic cost-effectiveness rating despite having no historic success.
FWIW this seems great—excited to see more comprehensive evaluations. Yeah, I agree with many of your comments here on the granular level — it seems you found something that is a potential issue for how ACE does (or did) some aspects of their evaluations, and publishing that is great! I think we just disagree on how important it is?
By the way, I’m ending further engagement on this (though feel free to leave a response if useful!) just because I already find the EA Forum distracting from other work, and don’t have time this week to think about this more. Appreciate you going through everything with me!
No problem. Thank you for your replies!