Thanks for the feedback, Laura. I think the point about ceiling effects is really interesting. If we care about increasing mean participation, then it shouldn't affect the conclusions (since the event would be useless for people already at the ceiling), but if (as you suggest) the value mostly comes from a handful of people maintaining or growing their engagement and networks, then our method wouldn't detect that. Detecting effects like that is hard, and while it's good practice to be skeptical of unobservable explanations, this one doesn't seem that implausible.
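To make the detection problem concrete: if the benefit is concentrated in a small fraction of attendees, the shift in the group mean is diluted by that fraction, and the sample size a mean-comparison test needs blows up. A rough sketch, where every number (5% of attendees, a 1 SD boost, the usual alpha = 0.05 and 80% power) is an illustrative assumption rather than an estimate from the survey:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample z-test
    to detect a standardized mean difference of `effect_size`."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = nd.inv_cdf(power)           # ~0.84
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical scenario: 5% of attendees gain a full standard deviation
# of engagement and everyone else is unchanged.
fraction, boost = 0.05, 1.0
diluted_effect = fraction * boost  # the group mean only shifts by 0.05 SD

print(n_per_group(boost))           # 16   (if everyone benefited)
print(n_per_group(diluted_effect))  # 6280 (benefit concentrated in a few)
```

So a mean-based test that is easily powered for a broad effect can be hopeless for the same total benefit concentrated in a few people, which is the scenario the ceiling-effect worry describes.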
One option would be to systematically look at the histories of people who are working in high-impact jobs and who joined EA after ~2015, and to trace, through interviews with them and their friends, whether we think they'd have ended up somewhere equally impactful if they hadn't attended EAGs. But that would necessarily involve huge assumptions about how impactful EAGs already are, so it may not add much information.
I agree that randomizing almost-accepted people would be statistically great but not informative about the impacts on non-marginal people, and randomly excluding highly-qualified people would be too costly in my opinion. We specifically reached out to people who were accepted but didn't attend for various reasons (which should be a good comparison point), but there are nowhere near enough of them at EAGxAus to get statistical results. If this were done for all EAG(x)s for a few years, we might actually get a great control group though!
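Some rough arithmetic on the pooled-control idea, with made-up numbers (a moderate standardized effect of 0.3, and roughly 10 accepted-but-didn't-attend people per conference) just to show the order of magnitude:

```python
from math import ceil
from statistics import NormalDist

def required_n(effect_size, alpha=0.05, power=0.8):
    """Per-group n for a two-sample z-test (normal approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(2 * (z / effect_size) ** 2)

# Hypothetical inputs: effect size 0.3, ~10 no-shows per conference.
controls_needed = required_n(0.3)
no_shows_per_event = 10
events_needed = ceil(controls_needed / no_shows_per_event)
print(controls_needed, events_needed)  # 175 controls, 18 conference-cohorts
```

Under these assumptions a single conference's no-shows are far too few, but pooling across all EAG(x) events for a couple of years could plausibly reach the required control-group size.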
We did consider having more questions and aiming more directly at the factors most indicative of direct impact, but we settled on this compromise for two reasons. First, every extra question reduces the response rate; given the 40% drop-out rate and a small sample size, I'd be reluctant to add too much. Second, questions that take time and thought to answer are especially likely to lead to drop-outs and inaccurate responses.
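The compounding worry behind the first reason can be quantified. Assuming (purely for illustration) 200 people invited, a 60% baseline completion rate, and each extra question independently costing 3% of the remaining respondents:

```python
def expected_completions(invited, base_rate, per_question_drop, extra_questions):
    """Expected completed surveys, assuming each extra question
    multiplies the completion rate by (1 - per_question_drop)."""
    return invited * base_rate * (1 - per_question_drop) ** extra_questions

# Hypothetical numbers: 200 invited, 60% baseline completion,
# 3% of remaining respondents lost per extra question.
print(round(expected_completions(200, 0.6, 0.03, 0)))   # 120
print(round(expected_completions(200, 0.6, 0.03, 10)))  # 88
```

Ten extra questions would shave roughly a quarter off an already small sample under these assumptions, which is why each added question has to earn its place.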
That said, leaving a text box for 'What EA connections and opportunities have you found in the last 6 months?' could be very powerful, though quantifying the results would require a lot of interpretation.
> and randomly excluding highly-qualified people would be too costly in my opinion
I feel like if it would give high-quality answers about how valuable such events are, it would be well worth the cost of random exclusion.
But this one feels more like “one to consider doing when you’re otherwise quite happy with the study design”, or something? And willing to invest more in follow-up or incentives to reduce drop-out rates.
Thanks for the response, I really like hearing about other people's reasoning re: study design! I agree that randomly excluding highly qualified people would be too costly, and I think your idea of building a control group from accepted-but-cancelled EAGx registrants across multiple conferences is a great one. My only issue with it is that these people are likely still experiencing FOMO (they wanted to go but couldn't). If we're considering a counterfactual scenario where the resources currently used to organise EAGx conferences are spent on something else, there's no conference to miss out on, so it removes a layer of experience along the lines of 'damn, I wish I could have gone to that'.
I'm not familiar enough with survey design to comment on the risk that adding more questions would reduce the response rate. If you think it would be a big issue, that's good enough for me, and I imagine it would also further skew the respondents towards more-engaged rather than less-engaged people. I do think that, for the purposes of this survey, it would make more sense to prompt the EAGx attendees to say whether they had followed up on any connections, ideas, or opportunities from EAGx in the last 6 months. I'm not sure how to word that so the same survey/questions could be used for both groups, though.