I don’t understand this methodology. Shouldn’t we expect most attendees of EAGx events to be roughly at equilibrium with their EA involvement, and even if they are not, shouldn’t we expect the marginal variance explained by EAGx attendance to be very small?
I feel like in order for this study to have returned anything but a null result, the people who attend EAGx events would have had to become a lot more involved over time. But, on average, in an equilibrium state, we should expect EAGx attendees to be as likely to be increasing their involvement in EA as they are to be decreasing it (otherwise we wouldn’t be at equilibrium, and we should expect EAGx attendance to increase each year).
Of course, de-facto EA is not in equilibrium, but I don’t see how this study would be measuring anything but the overall growth rate of EA (which we know from other data is currently relatively low and probably slightly negative). If EA is shrinking overall, then we should expect members of EA (i.e. people who attend EAGx events) to be taking fewer EA-coded actions over time. If EA is growing overall then we should expect members of EA to continue to get more involved.
Whether EA is shrinking or growing is of course an interesting question to ask, but this also feels like a very indirect and noisy way of measuring the EA growth rate.
Thanks for the comment, this is a really strong point.
I think this can make us reasonably confident that the EAGx didn’t make people more engaged on average, and even though you already expected this, I think a lot of people did expect EAGs would lead to actively higher engagement among participants. We weren’t trying to measure the EA growth rate, of course; we were trying to measure whether the EAGs lead to higher counterfactual engagement among attendees.
The model where an EAG matters could look something like this: there are two separate populations within EA, less-engaged members who don’t attend EAGs, and more-engaged members who attend EAGs at least sometimes. Attending an EAG pushes people toward becoming even more engaged, and maintains a level of engagement that would otherwise flag. So even if both populations are stable, EAG keeps the high-engagement population more engaged and/or larger.
An alternative model, where EAG doesn’t matter, is that people stay engaged for other reasons, and attend EAGs either incorrectly believing it will help or as de-facto recreation.
If the first model is true then we should expect EA engagement to be a lot higher in the few months after the conference, gradually falling until at least the weeks just before the next conference (and spiking again during/just after it). But if the second model is true then any effects on EA engagement from the conference should disappear quickly, perhaps within a few weeks or even days.
While the survey isn’t perfect for measuring this (6 months is a lot of time for the effects to decay, and it would have been better to run the initial survey several weeks before the conference, since the run-up to the conference might already have been getting people excited), I think it provides significant value since it asks about behavior over the past 6 months in total. If the conference had a big effect on maintaining motivation (which averages out to a steady state across years), you’d expect people to donate more, have more connections, attend more events etc. 0–5 months after a conference than 6–12 months after.
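As a toy sketch of that prediction (all numbers here are invented for illustration, not taken from the survey): under a "spiky" model the conference gives an engagement boost that decays exponentially, while under a "flat" model it has no lasting effect. Comparing the average over months 0–5 after a conference with months 6–11 shows the kind of gap each model predicts.

```python
def engagement(model, months_after, baseline=10.0, boost=5.0, half_life=2.0):
    """Monthly engagement score under the given toy model (invented parameters)."""
    if model == "spiky":
        # Conference boost that halves every `half_life` months.
        return baseline + boost * 0.5 ** (months_after / half_life)
    return baseline  # "flat": no lasting conference effect


def six_month_average(model, start_month):
    """Average engagement over a 6-month window, like the survey's recall window."""
    return sum(engagement(model, m) for m in range(start_month, start_month + 6)) / 6


for model in ("spiky", "flat"):
    after = six_month_average(model, 0)   # months 0-5 after the conference
    later = six_month_average(model, 6)   # months 6-11, roughly the pre-next-conference window
    print(f"{model}: after={after:.2f} later={later:.2f} gap={after - later:.2f}")
```

With these (made-up) parameters the spiky model predicts a clear gap between the two windows and the flat model predicts none, which is the contrast the before/after survey design is trying to pick up.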
Given we don’t see that, it seems harder to argue that EAGs have a big effect on motivation and therefore harder to argue that EAGs play an important role in maintaining the current steady-state motivation and energy of attendees.
It could still be that EAGs matter for other reasons (e.g. a few people get connections that create amazing value) but this seems to provide significant evidence against one major supposed channel of impact.
I agree that it provides a bit of evidence on whether EAGs have a very spiky effect, but I had a relatively low prior on that. My sense is the kind of people who attend EAGs already do a bunch of stuff that connects them to the EA community, and while I do expect a mild peak in the weeks afterwards, 6 months is really far beyond where I expect any measurable effect.
but this seems to provide significant evidence against one major supposed channel of impact.
I don’t understand this. Why would it be important for EAGs’ impact to have a spiky intervention profile? If EAGs and EAGx events are useful in a way where the benefit is spread out, so that they form a relatively stable baseline for EA involvement (which honestly seems almost inevitable given the kind of behavior you were measuring for EA involvement), then we would measure no effect with your methodology.
6 months is really far beyond where I expect any measurable effect
I agree there wouldn’t be new effects at that point, but we’re asking about total effects over the 6 months before/since the conference. If the connections etc. persist for 6 months then they should show up in the survey, and if they have disappeared within a few months then that indicates these effects of EAGx attendance are short-lived, which presumably makes them far less significant for a person’s overall EA engagement and impact.
Why would it be important for EAGs’ impact to have a spiky intervention profile?
If the EAG impacts are spiky enough that they start dissipating substantially within several months (but get re-upped by future attendance), then we should be able to detect a change with our methodology (higher engagement after). You’re right that if the effects persist for many years (and don’t stack much with repeat attendance) then we wouldn’t be able to measure any effect on repeat attendees, but in that case attendance wouldn’t be having much marginal impact on repeat attendees anyway. On the other hand, if effects persist for many years then we should be able to detect a strong effect for first-time attendees (though you’d need a bigger sample).
I think this can make us reasonably confident that the EAGx didn’t make people more engaged on average and even though you already expected this,
To be more explicit: we absolutely did not learn whether EAGx events made people more engaged on average. Because overall EA membership behavior is not increasing (EA is not growing), it is necessarily the case that the average EAGx attendee is reducing their involvement over time and that the average non-EAGx attendee is increasing theirs. This of course does not mean EAGx is having no effect; it just means that, in aggregate, there is churn in who participates in highly-engaged EA activities.
The effect size of EAGx could be huge and the above methodology would not measure it, because the effect would only show up for EAs who are just in the process of getting more engaged.
That’s an interesting point: under this model, if EAGxs don’t matter, then we’d expect engagement to decrease for attendees, and stable engagement could be interpreted as a positive effect. A proper cohort analysis could help determine the volatility/churn, giving us a baseline for estimating the magnitude of this effect among the sort of people who might attend an EAG(x) but didn’t.
That said, I still think that any effect of EAG(x) would presumably be a lot stronger in the 6 months after a conference than in the 6 months after that (i.e. the 6 months before the next conference), so if it had a big effect and the engagement of attendees was falling on average, then you’d see a bump (or stabilization) in the few months after an event and a bigger decline after that. Though this survey has obvious limitations for detecting that.
What did you mean by the last sentence? Above I’ve assumed that it has an effect not just for new people who are attending a conference for the first time (though my intuition is that this would be bigger) but also in maintaining (on the margin) engagement of repeat attendees. Do you disagree?