Thanks James and Miles, I appreciate your summary at the start of this post. It's good to read this, and it's important work to attempt to validate the core aims of the conference. A couple of ideas for survey tweaks:
1. Push to get a higher response rate for the control group, too. This should give a more representative sample of non-attendees. A 5% response rate probably captures the self-selected "keen beans", who may be more likely to stay the course in their views and actions (rather than decay over 6 months), and who may just have been too busy for that particular year's EAGx.
2. Add (or adjust) a survey step much later than 6 months. In my view, 6 months is too short for the control group to lose social connections. Such connections are lost over longer periods (1.5+ years), and are especially unlikely to be lost by the "keen beans" who respond to the survey; these respondents may be more conscientious or proactive, and stay in touch with their connections more often.
Thanks Rudstead, I agree about the "keen beans" limitation, though if anything that makes them more similar to EAGx attendees (whom they're supposed to serve as a comparison group for). In surveys generally, there are also steeply diminishing returns to pushing for a higher response rate with more reminders or larger cash incentives.
(2) Agreed, and hopefully we'll be able to continue following people up over time. The main limitation is that many people in any cohort study will drop out over time, but if it succeeded, such a cohort study could provide a wealth of information.
I like these ideas but have something to add re your 'keen beans', or rather, their opposite: at what point is someone insufficiently engaged with EA to bother considering them when assessing the effectiveness of interventions? If someone signs up to an EA mailing list and then lets all the emails go to their junk folder without ever reading them or considering the ideas again, is that person actually part of the target group for the intervention? They are part of our statistics (as in, they count towards the 95% of 'people on the EA mailing list' who did not respond to the survey). Is that a good thing or a bad thing?
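The selection-bias worry in this thread can be made concrete with a toy simulation. This is purely illustrative, with made-up numbers: it assumes a hypothetical control-group population where highly engaged "keen beans" are both far more likely to answer the survey and more likely to retain their engagement after six months, and shows how the observed retention rate among a ~5% self-selected sample can badly overstate the true rate.

```python
import random

random.seed(0)

# Hypothetical population of non-attendees (all parameters are assumptions,
# chosen only to illustrate the direction of the bias).
N = 10_000
population = []
for _ in range(N):
    keen = random.random() < 0.10  # assume 10% are highly engaged "keen beans"
    # Assumed 6-month retention: 80% for keen beans, 30% for everyone else.
    retained = random.random() < (0.80 if keen else 0.30)
    # Assumed survey response: keen beans respond far more often.
    responds = random.random() < (0.40 if keen else 0.01)
    population.append((retained, responds))

true_rate = sum(r for r, _ in population) / N
respondents = [r for r, s in population if s]
observed_rate = sum(respondents) / len(respondents)

print(f"response rate:      {len(respondents) / N:.1%}")   # roughly 5%
print(f"true retention:     {true_rate:.1%}")
print(f"observed retention: {observed_rate:.1%}")
```

Under these assumed parameters, respondents are mostly keen beans, so the survey-based retention estimate comes out far above the population's true rate. The exact numbers are arbitrary; the point is only that a low, self-selected response rate in the control group biases the comparison in a predictable direction.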