Although the specific details of staff’s backup plans are confidential, the co-founder estimates the total counterfactual impact of Charity Science employees/volunteers at close to ~$500k at the upper bound, covering all current combined staff/volunteer counterfactuals over the last 2.5 years. This number is extremely soft and based on limited evidence. Using this estimate, one could put the counterfactual-inclusive costs of Charity Science at $580k.
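As a quick sanity check on the arithmetic, here is a minimal sketch in Python. Only the ~$500k counterfactual upper bound and the ~$580k total are stated above; the ~$80k figure is simply the implied difference, not a number quoted anywhere.

```python
# Sketch of the cost figures discussed above. Only the ~$500k counterfactual
# upper bound and the ~$580k total are stated; the financial cost is implied.

counterfactual_upper_bound = 500_000   # soft upper-bound estimate, staff/volunteer counterfactuals, ~2.5 years
total_with_counterfactuals = 580_000   # costs including counterfactuals, as stated above
implied_financial_costs = total_with_counterfactuals - counterfactual_upper_bound

print(f"Implied direct financial costs: ~${implied_financial_costs:,}")  # ~$80,000
```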
That’s covered here:
Thanks. Given that, hasn’t Charity Science actively cost effective charities money?
It looks like CS turned a modest counterfactual ‘profit’ from a fundraising point of view.
Combined with the potential gains from some breakthrough in fundraising techniques that could be scaled up, I reckon CS was a better target for EA donations than e.g. AMF, though perhaps not many times better.
It all depends on what numbers you use: if you take the more speculative numbers for our counterfactual cost but the harder numbers for our money moved, then you can get a negative ratio, although this way of breaking it down is unintuitive to me. As I mentioned, depending on which numbers you choose to use, our ratios can change “between 1:11 Charity Science lifetime returns to 1:0.5”.
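To make concrete how the choice of inputs swings the ratio, here is a minimal sketch. The cost figures come from the numbers above; the money-moved figures are hypothetical placeholders, used only to illustrate how pairing speculative costs with hard money moved (or vice versa) can push the ratio above or below 1:1.

```python
# Illustrative only: how pairing different cost and money-moved estimates
# changes the returns ratio. Money-moved figures are hypothetical placeholders.

def returns_ratio(money_moved: float, costs: float) -> float:
    """Dollars moved to effective charities per dollar of cost."""
    return money_moved / costs

hard_costs = 80_000          # direct financial costs (implied by the $500k/$580k figures above)
soft_costs = 580_000         # costs including staff/volunteer counterfactuals (stated above)
hard_money_moved = 300_000   # hypothetical: conservatively attributed donations
soft_money_moved = 900_000   # hypothetical: speculative/lifetime attribution

print(returns_ratio(soft_money_moved, hard_costs))  # optimistic pairing: well above 1
print(returns_ratio(hard_money_moved, soft_costs))  # skeptical pairing: below 1, i.e. costs exceed money moved
```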
I disagree with this actually.
If you’re relatively skeptical, then you should include all the “soft” estimates of costs, but only include the “hard” estimates of benefits. That’s the most skeptical treatment.
You seem to be saying that you should only compare hard numbers to hard numbers, or soft numbers to soft numbers. Only including hard estimates of costs underestimates your costs, which is exactly what you want to avoid if you’re trying to make a solid estimate of cost-effectiveness.
Concrete example: Bill Gates goes to volunteer at a soup kitchen. The hard estimate of costs is zero, because Gates wasn’t paid anything. There’s a small hard benefit though, so if you only compare hard to hard, this looks like a good thing to do. But that’s wrong. There’s a huge “soft” cost of Gates working at the kitchen—the opportunity cost of his time which could be used doing more research on where the foundation spends its money or convincing another billionaire to take the pledge.
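The “most skeptical treatment” described above can be written down directly. A minimal sketch, with purely hypothetical placeholder numbers for the soup-kitchen example:

```python
# Sketch of the "skeptical" comparison: count all soft costs, but only hard benefits.
# All dollar figures are hypothetical placeholders.

def skeptical_net_benefit(hard_benefits: float, hard_costs: float, soft_costs: float) -> float:
    """Hard benefits minus *all* costs, hard and soft (opportunity) costs alike."""
    return hard_benefits - (hard_costs + soft_costs)

hard_benefit = 50       # hypothetical: value of an hour of kitchen labour
hard_cost = 0           # Gates isn't paid, so the hard cost is zero
soft_cost = 100_000     # hypothetical: opportunity cost of an hour of Gates's time

print(hard_benefit - hard_cost)                               # hard-to-hard comparison: looks positive
print(skeptical_net_benefit(hard_benefit, hard_cost, soft_cost))  # skeptical treatment: large negative number
```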
Interestingly, if you do the same pessimistic calculation for GWWC, you’ll still get a ratio of something like 6:1 or 4:1.
I don’t think GWWC’s staff opportunity costs are more than 50% of their financial costs, and very unlikely more than 100%, at least if you measure them in the same way: money the staff would have donated otherwise if they’d not worked at GWWC.
Or if you apply a harsher counterfactual adjustment to GWWC, you might drop to 3:1 or 2:1. But I think it’s pretty hard to go negative. (And that’s ignoring the future value of pledges, which seems very pessimistic, given that it’s a lifetime public pledge).
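A minimal sketch of how those adjustments might compound. The 6:1 base ratio, the 50–100% opportunity-cost range, and the rough 4:1/3:1/2:1 outcomes are taken from the comment; the exact way the adjustments are combined here is an assumption.

```python
# Sketch of the GWWC-style adjustments discussed above. The 6:1 base ratio and
# the 50-100% opportunity-cost range are from the comment; how they combine is assumed.

def adjusted_ratio(base_ratio: float, opp_cost_fraction: float, counterfactual_factor: float) -> float:
    """Scale money moved by a counterfactual factor and add staff opportunity
    costs (as a fraction of financial costs) to the denominator."""
    return base_ratio * counterfactual_factor / (1 + opp_cost_fraction)

print(adjusted_ratio(6.0, opp_cost_fraction=0.5, counterfactual_factor=1.0))   # 4.0  (~4:1)
print(adjusted_ratio(6.0, opp_cost_fraction=1.0, counterfactual_factor=1.0))   # 3.0  (3:1)
print(adjusted_ratio(6.0, opp_cost_fraction=1.0, counterfactual_factor=0.67))  # ~2.0 (~2:1)
```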