CEA’s events team: capacity building and mistakes
For CEA’s Q3 update, we’re sharing multiple posts on different aspects of our work.
This post was written before EA Global: London 2021, and refers to that event in future tense.
Events enable attendees to make new connections, learn about core concepts, share and discuss new research, and coordinate on projects.
In summary:
We put a lot of effort into building up our team.
We ran two well-received medium-sized events.
We made some significant mistakes with one of these events, which means that the counterfactual impact might have been negative. We plan to reflect on this after EA Global is complete, and we expect to make significant changes to address this.
In the past few weeks (as of 10/26/21), we’ve managed to roughly double the capacity of EA Global: London, in response to an unexpectedly high number of very strong applications. We’re currently focused on this event.
Depending on how EA Global: London goes, we expect to increase the total number of new connections (our key metric) relative to previous years. Optimistically, we might double the number of connections relative to 2019 (though the increase will likely be somewhat lower).
Metric: connections
Past surveys (e.g. Open Phil’s survey) suggest that connections between individuals are the key source of impact from our events. So we focus on the number of new connections we make at our events.
Currently, we calculate this as follows: number of attendees x average number of self-reported new connections. We define a new connection as “someone you feel comfortable reaching out to to ask for a favor”, so this is a relatively high bar. We expect that this is an overestimate, since people who engage a lot with the event are more likely to fill in our post-event survey, and also more likely to make lots of connections. However, we think it allows for some comparison between events and years.
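To make this concrete, here is a minimal sketch of the calculation as described above; the attendee and connection figures are purely hypothetical, not survey results.

```python
# Minimal sketch of the connections metric described above.
# The example figures are hypothetical, not actual survey results.

def estimated_connections(attendees: int, avg_self_reported_connections: float) -> float:
    """Total new connections = number of attendees x average self-reported new connections."""
    return attendees * avg_self_reported_connections

print(estimated_connections(500, 6.0))  # 3000.0 estimated new connections for a hypothetical event
```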
We’re currently working on fine-tuning this metric, finding better ways of measuring it, and thinking through other important outcomes to consider for measuring event outcomes.
Broadly, we think that we’re on track to nearly double new connections compared to 2019. This would be a slight increase from 2020. However, we think this methodology somewhat overestimates growth relative to 2019, and underestimates growth relative to 2020:
We think this methodology overestimates the number of connections at virtual events relative to in-person events[1]; all events in 2020 were virtual, and some events in 2021 were virtual, while all 2019 events were in-person.
2020 figures include EAGx, whereas 2019 and 2021 figures don’t.
See footnote #2 for details on how we came up with our “projected” figure.[2]
Staffing
Overall, we’re currently in a transition period for the team: we’ve made some very strong hires who we are now onboarding, or who are joining later this year. During this onboarding process, we’re still understaffed. But we think the team will be stronger in about 6 months.
Additionally, Amy is transitioning from a role focused on direct work on events to a role more purely focused on management, with support from Max (Executive Director).
We hired:
Lizka Vaintrob for the Events Generalist role: we expect her to focus on communications and help out with impact analysis.
Ollie Base as our Community Events Manager. He will help us to develop our overall events portfolio, with a particular focus on expanding our community-led events.
Pat Magcalas as a Personal Assistant for Amy Labenz at the start of October.
Barry Grimes is transitioning to the Happier Lives Institute starting in November. We’re grateful for all of his hard work on events over the past 2.5 years.
We’re working with Anine Andresen (previously a contractor) to increase her hours, in the hope that she might become a full-time team member.
We expect to bring on at least one additional hire in the next 3-6 months. In the meantime, we have hired some additional contractors:
Ivan Burduk to support Admissions and Stewardship
Ashley Lin to support Event Production, starting in October
Picnics
We ran the EA Picnic in San Francisco (July 11).
Outcomes:
191 attendees
Attendees who filled out the survey:
averaged 4.6 new connections formed through the Picnic
rated the Picnic as 8.38/10 on likelihood to recommend (LTR)
We considered not running this event due to capacity limitations, but elected to do so anyway as a means of testing a new format. We’re not sure if this was the right call: it meant we were more stretched in the run-up to EA Global and the Meta Coordination Forum.
We supported two similar events on the US East Coast (in NYC on October 2nd, and in Boston on October 9th), since EAGx East Coast was delayed. However, we think the majority of the impact from those events is attributable to their organizers, who don’t work for CEA.
Meta Coordination Forum
In September, we ran the EA Meta Coordination Forum, which had 40 attendees. The event was focused on facilitating ad hoc collaboration between key people working in the EA “meta” space. (This is a successor to the past series of Leaders Forum events.)
So far, feedback from attendees has been positive on average:
At the time of writing, we have 22/40 responses to the survey.
Median new connections = 4 (Mean = 5.33).
Self-reported comparisons with counterfactual value of time
All but 3 people found that the event was a better use of their time than the counterfactual.
The most common response (from 9 respondents) was that the time spent at the forum was 3-10x as valuable as the counterfactual.
A rough (and conservative) interpretation of responses gives 10.77x the counterfactual as the average response. This value is similar to that of the last in-person Leaders Forum (2019) and higher than that of the last virtual Leaders Forum (2020).
However, we made some mistakes in the run-up to the event, which meant that some key attendees could not attend (due to restrictions on who could enter the US). We think that we should have done more contingency planning for the scenario where US entry restrictions stayed in place longer than we anticipated. We also should have asked attendees for their preferences about timing and location earlier than we did.
We also could have done more to support attendees who were travelling from abroad and dealing with difficult COVID restrictions, and we failed to communicate promptly with some invitees. Less significantly, there were some operational issues onsite: for instance, snacks were not always easily accessible to attendees, and some food was not ready on time.
For these reasons, we think that the event was valuable, but that it was probably less valuable (maybe significantly less valuable) than it could have been. We’re still a bit unsure about this, because we think that delaying or moving the event might have caused us to lose different key attendees (and that delays would have pushed back some of the learnings/plan changes produced by the event, which might be important).
We think these events are important, and we value attendees’ time very highly. As a result, we plan to reflect carefully on these mistakes, and we are contemplating making some major changes (such as changing who is responsible for the event in the future).
Future events
Most of our focus is on running EA Global: London (in October). We were pleasantly surprised by the number of strong applications. This led us to increase the capacity of the event from around 500 to 1000 in the past few weeks (as of 10/26/21). We also created a concurrent virtual event, which has hundreds of attendees registered. We think that this might roughly double the value of the event, which would be quite exciting. This change does increase the risk of logistical issues, as well as the risk of COVID spread, but we think these risks are outweighed by the additional value, and we’re doing our best to anticipate and mitigate these risks.
The quality of applications also meant that we had to waitlist or reject some very strong applicants, because of limits on how many people we could accommodate even after increasing our capacity. Where possible, we’ve tried to point these applicants to other events or resources, so that they can continue to engage with EA.
We are also supporting EAGx Prague (in December).
The schedule for 2022 is still being developed, but with our increased capacity, we are considering running 3 large EA Global conferences in 2022. Additionally, we will be supporting EAGxBoston, EAGxOxford, EAGxSingapore, EAGxAustralia, and a virtual Student Summit.
Reflections
Overall, the events team is in the middle of a difficult transition, but I expect us to be stronger on the other side.
We’ve had greatly reduced capacity while we focused on hiring and onboarding. We still managed to run multiple events alongside this, but we made some mistakes in those events. We’re hopeful that our investments will pay off, and that we’ll be able to run more and better events in future years.
[1] We expect virtual events to overreport total numbers of connections relative to in-person events. As mentioned in the body, this figure is calculated as the number of attendees x average number of self-reported new connections. And we expect that this is an overestimate, since people who engage a lot with the event are more likely to fill in the survey, and also more likely to make lots of connections. We expect this effect to be bigger for virtual events, because there’s more variation in how much people engage with the event, and because fewer people complete the survey (causing more bias). Therefore, we expect this method to broadly overestimate the number of connections made in 2020 (and to some extent 2021) versus 2019.
[2] This assumes:
950 people at EAG: London in person, 850 people attending virtually.
Average connection numbers similar to those at past in-person events for in-person attendees (8.3), and slightly lower than those at previous virtual events for virtual attendees (4, versus a historical average of 4.4).
This gives a total of 950 * 8.3 + 850 * 4 = 7,885 + 3,400 = 11,285 connections from EAG.
We also expect around 200 connections from the Coordination Forum, which adds a total of 11,485 to the 6,144 number above.
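As a quick sanity check, here is a short sketch reproducing the arithmetic of this projection (the 6,144 figure is the total from earlier 2021 events referred to above).

```python
# Sketch reproducing the projected-connections arithmetic in this footnote.
in_person = 950 * 8.3        # EA Global: London in-person attendees x avg connections = 7,885
virtual = 850 * 4            # concurrent virtual attendees x avg connections = 3,400
eag_total = in_person + virtual          # 11,285 connections from EAG
mcf = 200                                # expected connections from the Meta Coordination Forum
projected_addition = eag_total + mcf     # 11,485, added to the 6,144 figure above
print(eag_total, projected_addition, projected_addition + 6144)
# 11285.0 11485.0 17629.0
```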
I’d be curious which survey result you’re thinking of here. Aside from a couple of qualitative responses, I don’t remember a question in the OP survey that I would think addresses this.
To my recollection (which may be mistaken), the OP survey didn’t include the question which more explicitly addresses this, which the EA Survey did.
See the question we included on whether people received important connections vs important information, here.
To make the contrast for EAG/EAGx clearer, here are the figures on a single graph, with raw totals rather than percentages of respondents (ordered by connections).
EAG and EAGx are among the top sources for new connections, after personal connections and EA groups (though the gap is quite marked, with those two categories each indicated as the source of a new connection twice as often; this may be partly explained by more EAs encountering personal connections or local groups than encountering EAG, since many EAs have not attended an EAG).
And EAG and EAGx lean more towards connections than towards learning new information. But it might be worth noting that the tendency is not dramatic: only about 1.6 people report making an important new connection from EAG or EAGx for every 1 person reporting learning something new.
Indeed, virtually no sources seem particularly connection-leaning (the most connection-leaning sources only account for 2x more people indicating an important connection than important information). Moreover, there seems to be relatively little difference between sources in their propensity to lead to more connections or more information. Even the most learning-leaning sources only account for 3-5x more cases of people indicating they learned something important than cases of making an important new connection, and those are sources which in most cases almost necessarily could not lead to a new connection (most people won’t make a new connection from reading a book or listening to a podcast).
This may suggest that the distinction between learning something important and making a new connection is of relatively little relevance at the level of individual sources. Instead, it seems that, in most cases where people could make a connection, they are also similarly likely to learn something new (put this way, I think this seems very intuitive).
(Of course, it is important to recall that this only refers to the ratio within sources, which might be relevant when considering which metrics are important for different sources, and not to differences between sources. Some sources lead to 10x more people making a new connection and 10x more people learning something new than others do.)
It’s also important to bear in mind that this doesn’t speak to the magnitude of the number of new connections or important new things being learned (it could be that the number of important new things each individual learns is typically much higher than the number of new connections each person makes, or vice versa), or to the magnitude of the importance of each of these (it could be that the connections are, on average, much more important than the new information, or vice versa). However, as far as I know, this is a limitation that applies to the OP data as well.
(This isn’t a full response to your comment.) I think we were mostly referring to the qualitative data from the OP survey.
Thanks for the update and good luck with the transition! Great to see the new hires.