Agree that the cohort data looks helpful for the reasons you mention (presuming a careful look at any privacy issues with releasing it checked out). I’ll respond to a couple of the points:
(Does GWWC attempt to verify donations?)
I don’t think so, and I’d guess this is probably the right move despite the costs of lacking verification (aside from the obvious costs of data gathering, I’m worried about the subtle costs of making giving for the pledge feel more like an obligation than an opportunity, turning people off and leading to fewer donations, and fewer people encouraging others to join).
It would be nice to be able to reconcile these very low drop-out rates with the very high ones GWWC published a few years ago here, which showed only 65-70% claimed retention after 0-2 years. Right now it seems hard to understand how these are consistent.
I think what’s going on here is that the 65-70% figures were mostly response rates on a survey of members. More recently GWWC has gone to rather more trouble to follow up with members who don’t initially answer the survey, in order to get better data (as it was unclear how many of the 30-35% non-respondents were still donating).
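To see why the two sets of figures can coexist, here’s a minimal sketch (the 67% and 95% below are made-up illustrative numbers, not GWWC figures): a survey response rate on its own only brackets the true retention rate, with the width of the bracket set by the unknown non-respondents.

```python
# Illustrative only: assumed numbers, not published GWWC figures.
response_rate = 0.67               # fraction of members answering the annual survey
retained_among_respondents = 0.95  # fraction of respondents still donating (assumed)

# If every non-respondent had dropped out:
lower_bound = response_rate * retained_among_respondents
# If every non-respondent were in fact still donating:
upper_bound = lower_bound + (1 - response_rate)

print(f"True retention is somewhere between {lower_bound:.0%} and {upper_bound:.0%}")
# -> between 64% and 97%; following up with non-respondents narrows this bracket.
```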
A 2-year grace period is very long. In finance we call a loan non-performing if it is 90 days behind! Worse, GWWC is young and has seen exponential growth, so a 2-year wait period means no data on anyone who joined since June 2013.
I see the sense in which it feels long. On the other hand, since members are asked to report annually, this amounts to “missed two cycles”; counting someone as dropped out after a single missed cycle seems like it might catch too many people. This one seems pretty thorny to me: more frequent data would be quite useful for analytics, but would on the other hand impose rather larger burdens on both GWWC staff and members. And to the extent that what we care about is long-term donation behaviour, we may just need to wait for data.
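As a rough sketch of what the “missed two cycles” rule amounts to (my illustration of the idea, not GWWC’s actual criterion):

```python
from datetime import date

def has_dropped_out(last_report: date, today: date, cycle_days: int = 365) -> bool:
    """Count a member as dropped out only after two missed annual reporting cycles."""
    return (today - last_report).days > 2 * cycle_days

# A member last heard from in mid-2013 has missed two cycles by mid-2015...
print(has_dropped_out(date(2013, 6, 1), date(2015, 6, 15)))  # True
# ...while one who reported in 2014 has missed only one, so still counts as retained.
print(has_dropped_out(date(2014, 6, 1), date(2015, 6, 15)))  # False
```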
Disclaimers: I work for the Global Priorities Project, part of the same umbrella organisation (CEA) as GWWC, and I was acting director of research at GWWC during 2014. I don’t know all the operational details, though.
Thanks for your interest, Dale, and Owen for responding so thoroughly.
I overall agree that there is a lot of information it would be nice to get at, although the numbers are somewhat small if we’re trying to work out the difference between, say, the 2009 cohort and the others (since there were only around 30 members in 2009). Now that I can spend less time on fundraising, I’ll try to put together a post about this.
Just to add a couple of points to what Owen has said: On the question of students transitioning to earning—that was definitely a time we worried people would drop out. The data doesn’t seem to suggest that’s the case, though—so far it seems that people do tend to actually start giving 10% when they start earning.
On verifying donations—we have in the past compared AMF’s data on donors with members’ self-reporting. While the self-reporting was far from perfect, people were at least as often under-reporting as over-reporting. (And most often, discrepancies simply turned out to be mistakes about when the person had donated.) For the reasons Owen mentions, we did this once (to get a sense of whether reporting was approximately accurate) but we aren’t planning to do it again.
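For anyone curious what that comparison involves, a hypothetical sketch (names and amounts invented for illustration; not AMF’s or GWWC’s actual data or schema):

```python
# Invented data for illustration; not AMF's or GWWC's actual records.
self_reported = {"member_a": 1200, "member_b": 800}    # members' reported donations
charity_records = {"member_a": 1200, "member_b": 950}  # the charity's donor records

for member in sorted(self_reported.keys() | charity_records.keys()):
    reported = self_reported.get(member, 0)
    recorded = charity_records.get(member, 0)
    if recorded > reported:
        print(member, "under-reported by", recorded - reported)
    elif recorded < reported:
        print(member, "over-reported by", reported - recorded)
```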
On the 2009 cohort—would it make sense to bucket this with the 2010 cohort? (Treating the first 14 months of GWWC as one cohort, and calendar years thereafter.)
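Concretely, since GWWC launched in November 2009, that bucketing would look something like this (a sketch; the cohort label is my own):

```python
# GWWC launched in November 2009, so the first ~14 months (Nov 2009 - Dec 2010)
# would form a single cohort, with calendar-year cohorts thereafter.
def cohort(join_year: int) -> str:
    return "2009-10" if join_year <= 2010 else str(join_year)

for year in (2009, 2010, 2011, 2014):
    print(year, "->", cohort(year))  # 2009 and 2010 merge; later years stand alone
```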