Thanks for this reply! I don’t have time to engage in much more detail, but I’m now a little more uncertain that my specific qualms with indirect impact are important to the project.
I don’t want to make you dig through your notes just to answer my question; I more intended to make the general point that I’d have liked to have a few more concrete facts that I could use to help me weigh Rethink’s judgment. (For example, if you shared some current numbers on corporate giving, I could assign my own ‘max scale’ parameter and check my intuition against yours.)
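The kind of sanity check I have in mind could look as simple as the sketch below. Every number here is hypothetical, purely to illustrate the shape of the calculation, not a claim about actual corporate giving or platform reach:

```python
# Hypothetical back-of-envelope check of a 'max scale' parameter.
# All figures below are made up for illustration; with real corporate
# giving numbers, one could substitute them and compare the implied
# scale against the report's parameter.

total_corporate_giving = 21e9   # annual corporate giving, USD (hypothetical)
reachable_fraction = 0.01       # share a platform could plausibly touch (hypothetical)
redirect_fraction = 0.10        # share of touched giving redirected to top charities (hypothetical)

max_scale = total_corporate_giving * reachable_fraction * redirect_fraction
print(f"Implied max scale: ${max_scale:,.0f}/year")  # → $21,000,000/year
```

If the product of one's own assumptions lands far from the report's parameter, that flags which input (total giving, reach, or redirect rate) accounts for the disagreement.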
Knowing that Donational started out with all or almost all TLYCS charities reduces my concern a lot. The impression I had was that they’d been working with a very broad range of charities and were radically cutting back on their selection.
>I more intended to make the general point that I’d have liked to have a few more concrete facts that I could use to help me weigh Rethink’s judgment.
That’s fair. Initially I was going to write a summary of our evidence and reasoning for all 42 parameters, or at least the 5-10 that the results were most sensitive to. In the end we decided against it for various reasons, e.g.:
- Some were based fairly heavily on information that had to remain confidential, so a lot would have to be redacted.
- Often the 6 team members had different rationales and drew on different information/experiences, so it would be hard in some cases to give a coherent summary.
- Sometimes team members noted their rationales in the elicitation document, but with so many parameters, there wasn’t always time to do this properly. Any summary would therefore also be incomplete.
- The report was already too long and was taking too much time, so this seemed like an easy way of limiting both length and delays.
But maybe it was the wrong call.
>Knowing that Donational started out with all or almost all TLYCS charities reduces my concern a lot. The impression I had was that they’d been working with a very broad range of charities and were radically cutting back on their selection.
I would consider TLYCS’s range very broad, but you may disagree. Anyway, you can see Donational’s current list at https://donational.org/charities
>I would consider TLYCS’s range very broad, but you may disagree.
TLYCS only endorses 22 charities, all of which work in the developing world on causes that are plausibly cost-effective on the level of some GiveWell interventions (even though evidence is fairly weak on some of them—I recall GiveWell being more down on Zusha after their last review). This selection only looks broad if your point of comparison is another EA-aligned evaluator like GiveWell, ACE, or Founder’s Pledge.
Meanwhile, many charitable giving platforms/evaluators support or endorse a much wider range of nonprofits, most of them based in rich countries. Even looking only at Charity Navigator’s perfect scores, you see 60 charities (only 1/4 of which are “international”), and Charity Navigator’s website includes hundreds of other favorable charity profiles. Another example: When I worked at Epic, employees could support more than 100 different charities with the company’s money during the annual winter giving drive.
I also imagine that many corporate giving platforms would try to emphasize their vast selection/”the huge number of charities that have partnered with us”—I’m impressed that Donational was selective from the beginning.
>TLYCS only endorses 22 charities, all of which work in the developing world on causes that are plausibly cost-effective on the level of some GiveWell interventions (even though evidence is fairly weak on some of them...)
It’s plausible that some of these are as cost-effective as the GW top charities, but perhaps not that they are as cost-effective on average, or in expectation.
>This selection only looks narrow if your point of comparison is another EA-aligned evaluator like GiveWell, ACE, or Founder’s Pledge.
You mean only looks broad?
Anyway, I would agree TLYCS’s selection is narrow relative to some others; just not the EA evaluators that seem like the most natural comparators.
>It’s plausible that some of these are as cost-effective as the GW top charities, but perhaps not that they are as cost-effective on average, or in expectation.
I agree, for most values of “plausible”. Otherwise, it would imply TLYCS is catching many GiveWell-tier charities GiveWell either missed or turned down, which is unlikely given their much smaller research capacity. But all TLYCS charities are in the category “things I could imagine turning out to be worthy of support from donors in EA with particular values, if more evidence arose” (which wouldn’t be the case for, say, an art museum).
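To make the some-vs-average distinction concrete, here is a toy sketch with purely hypothetical cost-effectiveness numbers (not estimates of any real charity): a list can contain a few charities plausibly matching a benchmark while still falling well short of it on average.

```python
# Toy illustration with made-up numbers: "some plausibly top-tier"
# is compatible with "below top-tier on average".

top_charity_value = 10.0  # benchmark cost-effectiveness, arbitrary units

# Hypothetical expected values for 22 endorsed charities:
# a couple near the benchmark, the rest well below it.
portfolio = [9.5, 9.0] + [3.0] * 20

some_plausibly_top_tier = any(v >= 0.9 * top_charity_value for v in portfolio)
average_value = sum(portfolio) / len(portfolio)

print(some_plausibly_top_tier)            # True: a few are close to the benchmark
print(average_value < top_charity_value)  # True: but the average falls short
```

The same logic holds in expectation: a wide distribution over each charity's true value can put a few in the top tier with nontrivial probability while the portfolio's expected value stays lower.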