Thank you for the response; I’m glad that this is being improved, and that there seems to be an honest interest in doing better.
I feel that “ensure others don’t get the wrong idea about how seriously such estimates should be taken” understates things: it should be reasonable for people to ascribe some non-zero level of meaning to published estimates, and in particular, using them to compare between charities shouldn’t lead you massively astray. If it’s “the wrong idea” to look at an estimate at all, because it isn’t the evaluator’s true best-reasoned expectation of results, then the error was in the estimate rather than in expectation management, and I find the deflection of responsibility here onto the people who took ACE at all seriously concerning.
The solution here shouldn’t be for people to trust things others say less in general.
Compare, say, GiveWell’s analysis of LLINs (http://www.givewell.org/international/technical/programs/insecticide-treated-nets#HowcosteffectiveisLLINdistribution); it’s very rough and the numbers shouldn’t be assumed to be close to right (and, responsibly, they say as much), but their methodology makes the estimates viable for comparison purposes.
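To illustrate the kind of rough-but-comparable estimate I mean, here is a toy sketch; every number and both charity scenarios are invented for illustration, and are not GiveWell’s or ACE’s figures.

```python
# Toy sketch: a back-of-the-envelope cost-effectiveness comparison.
# All numbers are hypothetical, chosen only to show the structure; the two
# charities are assumed to produce the same, commensurable outcome.

def cost_per_outcome(cost_per_unit, units_per_outcome):
    """Dollars needed to produce one unit of the target outcome."""
    return cost_per_unit * units_per_outcome

charity_a = cost_per_outcome(cost_per_unit=5.0, units_per_outcome=400)      # $2,000
charity_b = cost_per_outcome(cost_per_unit=0.20, units_per_outcome=50_000)  # $10,000

print(f"Charity A: ${charity_a:,.0f} per outcome")
print(f"Charity B: ${charity_b:,.0f} per outcome")
# Even if both absolute figures are off by a large factor, a shared and
# clearly documented methodology can keep the comparison informative.
```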
Cost-effectiveness is important: it is the measure of where putting your money does the most good and of how much good you can expect to do, and a cost-effectiveness estimate that fully accounts for risks and data issues is essentially what one arrives at when one determines what is effective. Even if you use other selection strategies for top charities, incorrect cost-effectiveness estimates are not good.
I agree: it is indeed reasonable for people to have read our estimates the way they did. But when I said that we don’t want others to “get the wrong idea”, I’m not claiming that the readers were at fault. I’m claiming that the ACE communications staff was at fault.
Internally, the ACE research team was fairly clear about what we thought about leafleting in 2014. But the communications staff (and, in particular, I) failed to adequately get across these concerns at the time.
Later, in 2015 and 2016, I feel that whenever an issue like leafleting came up publicly, ACE was good about clearly expressing our reservations. But we neglected to update the older 2014 page with the same kind of language that we now use when talking about these things. We are now doing what we can to remedy this, first by including a disclaimer at the top of the older leafleting pages, and second by planning a full update of the leafleting intervention page in the near future.
Per your concern about cost-effectiveness estimates, I do want to say that our research team will be making such calculations public on our Guesstimate page as time permits. But for the time being, we had to take down our internal impact calculator because the way that we used it internally did not match the ways others (like Slate Star Codex) were using it. We were trying to err on the side of openness by keeping it public for as long as we did, but in retrospect there just wasn’t a good way for others to use the tool in the way we used it internally. Thankfully, the Guesstimate platform includes upper and lower bounds directly in the presented data, so we feel it will be much more appropriate for us to share with the public.
You said “I think the error was in the estimate rather than in expectation management” because you felt the estimate itself wasn’t good; but I hope this makes it clearer that we feel the way we were internally using upper and lower bounds was sound; it’s just that the way we were talking about these calculations was not.
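To make concrete what carrying upper and lower bounds through a calculation buys you, here is a minimal sketch of Monte Carlo propagation in the style that Guesstimate uses; all inputs are hypothetical, and this is not our actual model.

```python
# Minimal sketch of Guesstimate-style uncertainty propagation: each input is
# a distribution rather than a point estimate, and the output is reported
# with its own interval. All inputs here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 5_000  # number of Monte Carlo samples (arbitrary for this sketch)

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal whose 90% interval is approximately (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # +/-1.645 sd covers 90%
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs, each given as a 90% interval.
cost_per_leaflet = lognormal_from_90ci(0.10, 0.50, N)         # dollars
conversions_per_leaflet = lognormal_from_90ci(1e-4, 1e-2, N)  # per leaflet

cost_per_conversion = cost_per_leaflet / conversions_per_leaflet
lo, med, hi = np.percentile(cost_per_conversion, [5, 50, 95])
print(f"Cost per conversion: ${med:,.0f} (90% interval: ${lo:,.0f} to ${hi:,.0f})")
# The width of that interval is the point: publishing only the median would
# invite exactly the over-literal readings discussed above.
```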
Internally, when we look at and compare animal charities, we continue to use cost-effectiveness estimates as detailed on our evaluation criteria page. We intend to publicly display these kinds of calculations on Guesstimate in the future.
As you’ve said, the lesson should not be for people to trust things others say less in general. I completely agree with this sentiment. Instead, when it comes to us, the lessons we’re taking are: (1) communications staff needs to better explain our current stance on existing pages, (2) communications staff should better understand that readers may draw conclusions solely from older pages, without reading our more current thinking on more recently published pages, and (3) research staff should be more discriminating about which types of internal tools are appropriate for public use. There may be further lessons to learn as ACE staff continues to discuss these issues internally, but, for now, this is what we’re thinking.
Fwiw, I’ve been following ACE closely for the past few years, and I always felt like I was the one taking cost-effectiveness estimates too literally, with ACE time after time tirelessly imploring me not to.
This all makes sense, and I think it is a very reasonable perspective. I hope this ongoing process goes well.