Hi Joel, great questions!
(1) Are non-reporters counted as giving $0?
Yes, at least for recorded donations (i.e., the donations that are within our database). For example, in cell C41 of our working sheet, we provide the average recorded donations of a GWWC Pledger in 2022-USD ($4,132), and this average assumes non-reporters are giving $0. Similarly, in our "pledge statistics" sheet, which provides the average amount we record being given per Pledger per cohort and by year, we also assumed non-reporters are giving $0.
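To make the averaging concrete, here's a minimal sketch in Python. The numbers are made up for illustration; only the logic (non-reporters add nothing to the numerator but still count in the denominator) reflects what the sheet does:

```python
# Minimal sketch with hypothetical numbers: non-reporters contribute $0 to the
# numerator but still count in the denominator, pulling the average down.
recorded_donations_usd_2022 = [5000.0, 12000.0, 0.0, 3000.0]  # reporters' recorded giving
num_non_reporters = 2  # pledgers with no recorded donations, treated as giving $0

total_recorded = sum(recorded_donations_usd_2022)  # non-reporters add nothing here
num_pledgers = len(recorded_donations_usd_2022) + num_non_reporters

avg_recorded_per_pledger = total_recorded / num_pledgers
print(avg_recorded_per_pledger)  # averages over ALL pledgers, not just reporters
```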
(2) Does this mean we are underestimating the amount given by Pledgers?
Only for recorded donations; we also tried to account for donations that were made but are not in our records. We discuss this more here, but in sum: for our best-guess estimates, we estimated that our records only account for 79% of all pledge donations, and therefore we need to make an upward adjustment of 1.27 to go from recorded donations to all donations made. We discuss how we arrived at this estimate pretty extensively in our appendix (with our methodology here being similar to how we analysed our counterfactual influence). For our conservative estimates, we did not make any recording adjustments, and we think this does underestimate the amount given by Pledgers.
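As a quick worked example of that adjustment: the 1.27 is just the reciprocal of the 79% recording share, and the $4,132 below is the recorded average quoted above (the rest is purely illustrative):

```python
# Sketch of the recording adjustment. The 79% share is the estimate from our
# appendix; applying its reciprocal scales recorded giving up to all giving.
recorded_share = 0.79                       # estimated fraction of pledge donations in our records
recording_adjustment = 1 / recorded_share   # 1 / 0.79

recorded_avg = 4_132.0                      # average recorded donations per Pledger (2022-USD)
estimated_all_donations = recorded_avg * recording_adjustment

print(round(recording_adjustment, 2))  # 1.27
print(round(estimated_all_donations))  # ~5230
```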
(3) How did we handle nonresponse bias and could we handle it better?
When estimating our counterfactual influence, we explicitly accounted for nonresponse bias. To do so, we treated respondents and nonrespondents separately, assuming for all surveys that our influence on nonrespondents was only a fraction of our influence on respondents (see the sketch after this list):
50% for our best-guess estimates.
25% for our conservative estimates.
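To show how these fractions enter the calculation, here's a minimal sketch in Python. Everything in it is hypothetical except the 50%/25% fractions above; the influence and respondent-share figures are made up, and the real calculation lives in our working sheet:

```python
# Sketch of the nonresponse adjustment: blend respondent and nonrespondent
# influence, discounting the latter by the fraction described above.
influence_on_respondents = 0.6  # hypothetical counterfactual influence among respondents
respondent_share = 0.4          # hypothetical fraction of pledgers who answered the survey

def adjusted_influence(nonrespondent_fraction: float) -> float:
    """Overall influence, assuming our influence on nonrespondents is only a
    fraction of our influence on respondents."""
    influence_on_nonrespondents = nonrespondent_fraction * influence_on_respondents
    return (respondent_share * influence_on_respondents
            + (1 - respondent_share) * influence_on_nonrespondents)

print(adjusted_influence(0.50))  # best-guess estimate
print(adjusted_influence(0.25))  # conservative estimate
```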
We actually did consider adjusting this fraction depending on the survey we were looking at, and in our appendix we explain why we chose not to in each case. Could we handle this better? Definitely! I really appreciate your suggestions here; we explicitly outline handling nonresponse bias as one of the ways we would like to improve future evaluations.
(4) Could we incorporate population base rates of giving when considering our counterfactual influence?
I'd love to hear more about this suggestion; it's not obvious to me how we could do this. For example, one interpretation here would be to look at how much Pledgers are giving compared to the population base rate. Presumably, we'd find they are giving more. But I'm not sure how we could use that to inform our counterfactual influence, because there are at least two competing explanations for why they are giving more:
One explanation is that we are simply causing them to give more (so we should increase our estimated counterfactual influence).
Another is that we are just selecting for people who are already giving a lot more than the population average (in which case, we shouldn't increase our estimated counterfactual influence).
But perhaps I'm missing the mark here, and this kind of reasoning/analysis is not really what you were thinking of. As I said, I'd love to hear more on this idea.
(Also, appreciate your kind words on the thoroughness/robustness)
Thanks for the clarifications, Michael, especially on non-reporters and non-response bias!
On base rates, my prior is that people who self-select into GWWC pledges are naturally altruistic, and so it's right (as GWWC does) to use the more conservative estimate. Against this, though, is the concern that self-reported counterfactual donations aren't that accurate.
It's really great that GWWC noted the issue of social desirability bias, but I suspect it works to overestimate counterfactual giving tendencies (rather than overestimating GWWC's impact), since the desire to look generous almost certainly outweighs the desire to please GWWC (see research on donor overreporting: https://researchportal.bath.ac.uk/en/publications/dealing-with-social-desirability-bias-an-application-to-charitabl). I don't have a good solution to this, insofar as standard list experiments aren't great for dealing with quantification as opposed to yes/no answers. I'd be interested in hearing how your team plans to deal with this!