This is more accurate than email tracking in that it captures more people (such as those who didn’t give an email or who changed emails), but less accurate in that people who report an early join date may first show up only in later surveys, offsetting people who dropped out and making the retention rate appear higher than it actually is.
Why should the possibility of early EAs failing to take early surveys inflate the retention rate more than the possibility of early EAs failing to take later surveys deflate it? Shouldn’t we expect these two effects to roughly cancel each other out? If anything, I would expect EAs in a given cohort to be slightly less willing to participate in the EA survey with each successive year, since completing the survey becomes arguably more tedious the more you do it. If so, this methodology should slightly underestimate, rather than overestimate, the true retention rate. Apologies if I’m misunderstanding the reasoning here.
Yeah, I suppose you’re right. I guess the point is that the offsetting effect still makes it hard to estimate the true retention rate… not to mention any other differential non-response in survey-taking.
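To make the two scenarios concrete, here is a minimal simulation sketch. All parameters (cohort size, an 80% annual retention rate, a 30% baseline response rate, a 10%-per-year fatigue factor) are hypothetical, since the survey's actual response model is unknown; the point is only to show the direction of the bias under each assumption.

```python
# Minimal sketch (hypothetical parameters throughout) of the two scenarios
# discussed above: with a constant response rate, early non-responders who
# show up later roughly cancel out dropouts, while declining willingness
# to respond ("survey fatigue") makes retention look worse than it is.
import random

random.seed(0)

COHORT = 50_000          # people who truly joined in survey year 0
ANNUAL_RETENTION = 0.8   # each year, 80% of remaining members stay (assumed)

def simulate(response_rates):
    """Return apparent retention per survey year, given per-survey response rates."""
    # Draw how many years each member remains active (geometric process).
    active_years = []
    for _ in range(COHORT):
        y = 0
        while random.random() < ANNUAL_RETENTION:
            y += 1
        active_years.append(y)
    # In each survey year, every still-active member independently responds
    # with that year's response rate and reports having joined in year 0.
    counts = []
    for year, rate in enumerate(response_rates):
        counts.append(sum(1 for a in active_years
                          if a >= year and random.random() < rate))
    # Apparent retention: respondents claiming the cohort, relative to year 0.
    return [c / counts[0] for c in counts]

true_ret = [ANNUAL_RETENTION ** y for y in range(5)]
flat = simulate([0.3] * 5)                                # constant willingness
fatigued = simulate([0.3 * 0.9 ** y for y in range(5)])   # 10% fatigue per year

for y in range(5):
    print(f"year {y}: true {true_ret[y]:.2f}  "
          f"constant-response {flat[y]:.2f}  fatigued {fatigued[y]:.2f}")
```

Under these assumptions, the constant-response estimate tracks the true retention curve (the two non-response effects cancel in expectation), while the fatigued estimate falls faster than the truth, i.e. it underestimates retention, as suggested above.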