Note/disclaimer: I recently started working on the Events Team (I do communications and impact assessment), and I shared this response with them before posting it (for edits and comments). Still, the thoughts here are mine.
The tl;dr:
I think you bring up some good points. I agree that we could have planned further in advance, been more transparent, and done more to reduce COVID risk at the event. I think these are important points, and I’ll discuss them below.
But in the end, I think it was right to increase the size of EAG: it roughly doubled the value produced by the event, and I think the increase in COVID risk was small compared to this benefit. I also think that there are some errors in your post. Finally, I want to note that our survey data suggests most attendees were happy with how we handled this situation.
So let’s get into it.
We should have put more effort into anticipating the quality of applications than we did (but we’re also not sure if we could have done that much better)
One of the claims that you make in this vein is:
“Expanding after the fact was substantially worse than deciding on a larger conference from the beginning.”
This is true. Expanding capacity so late in the process surprised our attendees and speakers with last-minute changes and overwhelmed the team, impeding our ability to perfect various elements of the conference. You accurately note that “only deciding to expand the event a month in advance reduced CEA’s (and other actors’) ability to make changes to the event to compensate for the increased size.”
The key question, though, is whether we could have realized that we would need to make this change sooner. So on to your other point:
“The decision to expand was based on unsurprising information that should have been taken into account in advance.”
I think this is somewhat true, although I also think hindsight is particularly strong here. There were a number of difficult-to-predict factors at play. For instance, we provided more funding and support (for travel and accommodation) than ever before, which we guessed would increase attendance. But on the other hand, we thought that COVID caution might reduce the number of applications. I don’t think we had great data on how these factors would balance out.
Finally, just to clarify: the major surprise was not the number of applications, but rather their quality, which became clear after applications opened in September. In retrospect, we wish we had opened applications sooner.
Still, we should have spent more time trying to predict the number of great applications we’d get, and we generally could have been more careful with that process.
So I agree that in the future we should do better modeling, use forecasting for our planning, and be more aware of broad changes to the community. We’ll try to make explicit plans for being better at this in the future, and we welcome advice on how to do this.
It would have been better if we were more transparent about our reasoning
“The decision to expand the event was underjustified and untransparent.”
I agree.
To be clear, though, we were not trying to be untransparent: we were just busy with other things and didn’t prioritize this. At that point, it seemed more urgent to make sure that the schedule made sense, that volunteers knew what to do, and that registration actually happened than to write up and publish our thought process.
But I agree that by not doing this, our decisions may have seemed mysterious and underjustified. We didn’t explain what had changed since our 2019 posts, and this may not have been the right choice.
We made mistakes when attempting to make EA Global as safe as possible for attendees.
“CEA undercompensated for the increase in risk.”
Here are some things we did to make the event safer:
We required all attendees to be fully vaccinated (except a few minors). We checked the proof of vaccination.
Guest-facing external contractors were vaccinated. In some cases, vendors informed us we were legally prohibited from asking about their staff’s vaccination status; in those cases, we spent money and effort to make sure that they were taking regular lateral flow tests.
We made over 1000 lateral flow tests available for free to attendees, and required that attendees do a lateral flow test before attending (on an honor system).
We had thousands of masks onsite (though most attendees chose not to use them).
We issued refunds to anyone who felt sick, even if they had tested negative.
We asked the venue to increase ventilation and to keep the windows open (see below for what happened here).
(Here’s our COVID protocol.)
We also offered refunds (including for travel/accommodation) to everyone who signed up before we expanded the event, to allow people who weren’t comfortable with the larger event size to change their plans.
We considered some further measures, like checking that everyone completed tests every day. After assessing the risks and consulting with our advisory board, we decided that the benefits of verifying these (rather than trusting people to carry out tests as we asked) didn’t justify the significant costs.
But we made some errors here. For instance, we asked the venue to keep windows open, but they didn’t do this as much as we wanted them to. I think we should have pushed harder during the event for them to do what we had requested.
A note about how COVID risk relates to the decision to expand the conference. According to our model of COVID risk, the key factor in risk of infection is how densely packed attendees are. As we doubled the capacity, we also approximately doubled the size of the space available for lunch (and made sure that attendees actually spread out into this space) and other sessions. We did this by reserving a second venue (a good chunk of the Barbican) and adding the marquees for one-on-ones. So I think that although the risk increased a bit, it didn’t increase by that much (substantially less than by 100%).
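To make the density point a bit more concrete, here is a minimal toy sketch (not CEA’s actual model) of why doubling attendance while doubling the usable space need not double per-attendee risk. Every number in it (the prevalence, the exposure constant, the areas and hours) is a hypothetical placeholder chosen purely for illustration.

```python
import math

# Toy airborne-exposure model (illustrative only, not CEA's model): per-attendee
# risk grows with the density of presumably infectious people per square metre
# and with the time spent onsite. All parameter values are hypothetical.
def per_attendee_risk(n_attendees, area_m2, prevalence=0.01, k=0.05, hours=8):
    infectious = n_attendees * prevalence   # expected infectious attendees
    density = infectious / area_m2          # infectious people per square metre
    return 1 - math.exp(-k * density * hours)

risk_500 = per_attendee_risk(500, 5_000)     # original plan: 500 people, 5,000 m^2
risk_1000 = per_attendee_risk(1000, 10_000)  # expansion: double the people AND the space
print(risk_500, risk_1000)  # roughly equal, since the density is unchanged
```

Under this kind of model, the residual increase in risk comes from things the sketch ignores (shared corridors, registration queues, more total contacts), which is consistent with “increased a bit, but substantially less than by 100%”.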
The decision itself
It’s September and we’ve realized that the quality of applications we’re getting is far higher than we expected. We’re faced with the decision described above: do we grit our teeth and stick to our original plan, or increase the capacity two-fold?
When I first heard the proposal to expand the conference size by around 100% (the first proposal was actually a bit smaller), I thought it was insane. I imagined the chaos during registration, the confused emails, and, indeed, the increased COVID risk. We started brainstorming ways that might help us expand capacity without increasing COVID risk, and came up with the marquees and the second venue. I still thought the event would end up worse.
But then during a conversation with Amy, I realized that I was thinking from the point of view of a single attendee — someone admitted in the first wave of applications to an event for 500 people, who would suddenly be introduced to a more confusing conference for 1000. I could viscerally feel the loss in quality.
What I wasn’t imagining or feeling was the situation of an attendee who could only attend if we expanded capacity. They would be stripped of the chance to solicit feedback on their new animal welfare initiative, recruit candidates for their growing AI safety startup, get feedback on their big career decision, connect with potential funders or collaborators, or get a better understanding of the most pressing issues in the Global Health and Wellbeing space. This lost impact was invisible to my System One, but it would nevertheless be a massive amount of lost impact.
In other words, I realized that even if the conference were a bit worse for those who had already registered, it might roughly double the total value, by helping more people to attend.
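As a rough back-of-the-envelope illustration of that claim (with made-up numbers, not CEA data): even if the original cohort’s per-attendee value dips somewhat, admitting a second cohort of similar size keeps the total close to double.

```python
v = 1.0                                    # value per attendee at the original 500-person event (arbitrary units)
value_unexpanded = 500 * v                 # stick with the original plan
value_expanded = 500 * 0.9 * v + 500 * v   # original cohort assumed ~10% worse off, plus a new cohort of 500
print(value_expanded / value_unexpanded)   # 1.9, i.e. close to double
```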
In order to fulfill our obligations to the people who already registered, we offered full refunds (including travel expenses) to anyone who did not want to attend the larger event. (Although I acknowledge that psychological lock-in effects are real.)
I think that we managed the increase in capacity well, and we didn’t significantly worsen the experience relative to a 500-person conference: logistics were smooth, COVID risk wasn’t much higher, and the original attendees got access to nearly 500 extra people. In fact, due to the increased value from access to the extra attendees, I think the change might have been net positive for the initial 500 attendees (but we’re not sure overall).
In hindsight, I think the decision was correct.
Survey data
Here is some data from the feedback survey we sent to in-person attendees to back this up.
We asked attendees whether they wanted fewer or more attendees (a multi-part question introduced as “What changes should we make for future conferences?”, where the options were “N/A / No strong opinion”, “Fewer”, “Keep the same”, and “More”).
Here is the result (587 responses):
We also asked “How satisfied were you with pre-conference communication?” Here are the results (596 responses):
We asked “How satisfied were you with our COVID policy?” Results (593 responses):
[EDIT: Note that we plan to follow up with people who applied and were accepted but could not attend or cancelled their registration, and their responses to the above question may be different from those of the people who did come to EA Global. I’m guessing that in aggregate, the effect will be small due to the numbers (most people who were accepted did attend), but I will also consider those cases separately.]
We asked “How satisfied were you with COVID safety at EA Global in practice?” Results (595 responses):
The average response to “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?” (on a scale from 0 to 10) was 9.11. For reference, this was 8.2 for EA Global: San Francisco 2019, 8.5 for EA Global: London 2019, and 7.8 for EA Global: Virtual in 2020.
I don’t add all of this here to claim that we did everything perfectly. I don’t think we did, and we plan to interview attendees with different perspectives to figure out how exactly we should improve in the future, both with respect to COVID and with respect to other aspects of running conferences.
But I do want to say that we took the decision to expand the conference (as well as the COVID implications of this decision) seriously, and I think it was the right decision.
In any case, I want to thank you for posting this on the Forum. I’m sorry for how these changes affected you, and I think it’s good for these criticisms to be discussed publicly.
I anticipate that some people who were excluded from participation by the COVID policy are unhappy with it and are not counted in the survey. I know at least a few people who were accepted but could not participate because vaccination with Sputnik V was not sufficient. That said, the EAG coronavirus policy is understandable, and these people are unhappy in a different way than the OP.
This is true, thank you for pointing it out! I plan on following up with people who could not attend for a bunch of different reasons, and this is one such reason. I really do appreciate you pointing it out, though, since it could be a real confounder, and I’m adding a note to my original response to that effect.
“I think that we managed the increase in capacity well, and we didn’t significantly worsen the experience relative to a 500-person conference: logistics were smooth, COVID risk wasn’t much higher, and the original attendees got access to nearly 500 extra people. In fact, due to the increased value from access to the extra attendees, I think the change might have been net positive for the initial 500 attendees (but we’re not sure overall).”
Finally, I want to point this out as something I agree is correct, and is pretty remarkable given the short timeframe and lack of staff. I encountered ~no logistical problems arising from the last-minute doubling in size, which is definitely a testament to the team’s hard work and ability (and a decent argument against some of my more demanding claims).
“Here are some things we did to make the event safer [...]”
I think this is a complete list of everything that was done, before and after the decision to expand. Is that correct? If so, what’s the subset of things that were done to compensate for expansion?
“According to our model of COVID risk, the key factor in risk of infection is how densely packed attendees are. As we doubled the capacity, we also approximately doubled the size of the space available for lunch (and made sure that attendees actually spread out into this space) and other sessions. We did this by reserving a second venue (a good chunk of the Barbican) and adding the marquees for one-on-ones. So I think that although the risk increased a bit, it didn’t increase by that much (substantially less than by 100%).”
Yeah, I should have noted that the marquees did increase safety by reducing density, even if they weren’t safer per square meter than the main event. I do think, though, that the impression given was that the marquees would be relatively safer places to meet for those uncomfortable with the indoor spaces, and I don’t think that was the case. Do you agree? (It’s fine if you agree but also think it wasn’t that important.)
“We considered some further measures, like checking that everyone completed tests every day. After assessing the risks and consulting with our advisory board, we decided that the benefits of verifying these (rather than trusting people to carry out tests as we asked) didn’t justify the significant costs.”
Seems like if you’re already on the honour system, you might as well ask people to take a test every day?
“I think this is a complete list of everything that was done, before and after the decision to expand. Is that correct? If so, what’s the subset of things that were done to compensate for expansion?”
Some of the things I listed were added to the COVID Protocol only after the decision to expand. One example of this is that we weren’t planning on requiring tests before we decided to expand. We also decided to closely monitor the number of people who were present in any given room, especially during lunch, when we thought more people would be unmasked. (Very few of the attendees ended up using masks regularly, which we did not expect, so lunch was not as unique as we thought it would be.) I think the most important thing we did to compensate for the expansion, however, was to add the second venue and the marquees (as I mentioned before).
“I do think, though, that the impression given was that the marquees would be relatively safer places to meet [...]”
We indeed thought that the marquees would be more ventilated than they were. I agree that this is the impression we communicated to attendees.
“Seems like if you’re already on the honour system, you might as well ask people to take a test every day?”
I’m not sure how this follows, but you can probably make the argument that we should have pushed people to take tests more frequently. We discussed this option, and I’m still undecided on it. We ended up encouraging people to take frequent tests but not requiring them. (I’m not sure what kind of response to this point you’re looking for.)
I’m going to separate some of my responses into separate comment threads here, since there’s a lot to unpack.
First, and most importantly:
“I also think that there are some errors in your post.”
Can you enumerate what these are? It’s not really clear to me from your response, though I may have missed it. (I’m assuming we’re both operating on a model where “errors” are not the same thing as “disagreements”.)
I don’t have time to go into great depth for every one of your questions, but I’ll try to give quick replies to as many as I can.
It’s fair to separate errors from disagreements, and I focused mostly on disagreements in my original reply. (There are more things that should be classified as disagreements than should be classified as errors, and I think that the disagreements in this case mattered more to the discussion, which is why I focused on them. It’s possible I should have deleted the “some errors” line in my reply once I drafted it.) Things I think are errors on your part:
You write: “the fact that there was a large waiting list for this conference was not at all surprising; I think I would have given an ex ante probability of over 90%. I’d be very surprised if the events team wasn’t also expecting this.”
The Events Team did not expect this. (You phrase it as a prediction that you would be surprised to learn that this was the case, so I don’t have a way of knowing if this is in fact an error. But I think at the core, the statement is that you suspect that we expected this. We didn’t.)
You write: “As with most COVID-related decisions made by the CEA events team over the course of the pandemic, the change in policy was justified solely by an appeal to authority – namely, that they consulted with their COVID advisory board.”
This isn’t correct — neither for the recent event nor for EA Global: SF 2020 (the other conference where our plans were substantially changed by COVID).
In the email where we offered refunds due to the change of plans, we shared our reason for expanding the conference: “We’ve received hundreds of applications for the event, and we currently have over 300 exceptional candidates on our waiting list. We really want these people to have a chance to attend, but we can only invite them if we increase our capacity.”
We were also concerned about the potential increase in risk and understood that some of the people who had registered might not want to attend as a result, and thus offered refunds and other resources they could use to make their decision (e.g. microCOVID).
One sentence in our email read: “After consulting our COVID Advisory Board, we’ve decided to increase this cap to allow those 300 people to join us in London”. This doesn’t mean that the board was our only source of guidance — just that consulting them was one necessary step we took before deciding to increase the cap.
For reference, the Events Team posted this about the decision in 2020. (Note that the composition of the COVID Board has changed since then.) This post isn’t just summarizing what the advisory board said — it also includes a lot of the team’s reasoning.
You write: “Apart from those few people privileged to have close links with CEA or the Oxford office, most attendees didn’t have access to the reasoning used to make the decision to expand. The arguments and data used to make that consequential decision should have been made available to the community, so that we could evaluate whether we found them convincing, challenge them if not, and update our beliefs and plans accordingly. This was not done.”
To be clear: none of the CEA Events Team was based in Oxford when this decision was made.
It’s true that we did not publish the reasoning. We didn’t because writing something with that kind of polish and care would have taken bandwidth and hours our team did not (and still does not) have. We did, however, write a memo about this decision, which we shared with 40 community leaders (with a request for feedback), and had conversations about it with around a dozen people who attended the EA Meta Coordination Forum. We received overwhelmingly positive feedback on this decision.
I’d also like to add that while in general, I agree that transparency is good, I think you and I have pretty different pictures of what info would be most useful to the community here.
The info we prioritized getting to the community was what we thought would help them make decisions: whether to still come to the conference given the expanded size, and later the breakdown of microCOVID estimates by activity, which people could use to think about having more meetings outdoors, etc.
It sounds like you think it’s particularly important for us to share the specifics of the reasoning behind expanding the conference, beyond the points we’ve already shared. I don’t fully understand what you think would be useful to the community about having these particular details. There’s a lot of reasoning we could write up about each conference: Why does it cost this much? Why hold events in these cities and not those cities? Why have two conferences and not more? Why is this kind of food served?
Discussions and debates on these topics can be useful; sometimes, pushback from the community has helped CEA (and other organizations) recognize things we should be doing differently. But to publish all the reasoning behind all these things as it changes over time would take time that we can’t then use to actually produce events. (Especially as this work has to involve the people running the events, whose time is scarcer and harder to replace than contractor time.)
I think some of the disconnect/confusion in the follow-on discussion here is being caused by my failing to update from a model of “CEA made a difficult COVID cost/benefit calculation” to one of “CEA didn’t think COVID was a major concern” sufficiently quickly/completely. (Most of my post was written under the former model.)
I need to think more about that before I try to clarify my position here, because I’m not 100% sure what it is.
“I think this is somewhat true, although I also think hindsight is particularly strong here. There were a number of difficult-to-predict factors at play. For instance, we provided more funding and support (for travel and accommodation) than ever before, which we guessed would increase attendance. But on the other hand, we thought that COVID caution might reduce the number of applications. I don’t think we had great data on how these factors would balance out.”
One important thing that came out of my discussions with CEA prior to the event was that many high-priority attendees applied later than expected, after many tickets had already been assigned. This seems like it significantly strengthened the case for expansion – but also suggests that the process by which tickets were assigned early was quite flawed. Do you agree with that assessment?
“In retrospect, we wish we had opened applications sooner.”
Yup, this seems like it would have helped a lot on all fronts. What was the reason for opening applications so late? COVID uncertainty?
“To be clear, though, we were not trying to be untransparent: we were just busy with other things and didn’t prioritize this. At that point, it seemed more urgent to make sure that the schedule made sense, that volunteers knew what to do, and that registration actually happened than to write up and publish our thought process.”
Yeah, I think this is the crux here. Especially post-event, the untransparency aspect is the part of my criticisms that I care about most, and if I were persuaded they were wrong I’d have a lot less to say.
Some background that didn’t come across in the post is that, given CEA’s place in the community and the kinds of projects it takes up (and the impact of that on others in the community who might want to do similar projects), I think it’s very important that CEA in particular is highly transparent about the decisions it makes and why it makes them – to the point, if necessary, of investing in extra capacity to make this extra transparency possible. I think this is important both to make sure the core functions that CEA carries out are being done as well as possible, and to let other orgs learn from CEA’s lessons, updates and mistakes. I wouldn’t apply the same standards of transparency to all EA orgs (though I think we should generally be aiming for high transparency in most areas).
That said, I do think there was quite a bit that could have been done to increase transparency relatively easily, including (in escalating order of effort):
Including a few sentences and links in the expansion notification post, explaining and supporting the key updates underlying the change.
Writing a short Forum post summarising the various short arguments that were later made in private to me during feedback on this post.
Publishing any quick-and-dirty models and other materials CEA used to make the decision to expand.
Publishing redacted or summarised versions of the advice CEA received from its COVID advisory board (I think this should have been done for ~all COVID decisions made by the CEA events team).
It’s possible I’m underestimating the amount of work some of these would have required; in that case, though, now (i.e. after the event) seems like a great time to write that up more carefully for publication.
Beyond the question of transparency around materials, though, I’m also very interested in transparency around process. What was the process by which CEA decided to expand? What sorts of evidence were gathered? How heavily did COVID weigh in this decision? Did CEA make quantitative estimates of the COVID risk of EAG attendees, and the effect of the changes in size on this? (If so, those seem like things that would have been fairly easy to share.)
“I think it’s very important that CEA in particular is highly transparent about the decisions it makes and why it makes them – to the point, if necessary, of investing in extra capacity to make this extra transparency possible.”
I agree, for many forms of transparency (like information people need to make decisions), though less so for other forms (our internal reasoning about event management). We’re doing a lot right now to scale up our capacity.
“I do think there was quite a bit that could have been done to increase transparency relatively easily, including (in escalating order of effort) [...] It’s possible I’m underestimating the amount of work some of these would have required”
My guess is that you are underestimating the amount of work — and especially the mental energy and foresight — required for this, or perhaps you’re overestimating how much free time we had while planning the conference. It’s also plausible that doing some or all of the things you list was genuinely a good idea at the time, and something we should have done, even at the cost of spending less time on other priorities. If that’s true, we didn’t realize it at the time.
“now (i.e. after the event) seems like a great time to write that up more carefully for publication.”
The Events Team is actually quite busy after EA Global. (As we mentioned, we’re working on hiring/onboarding — and there’s always another event to plan.) We may still end up writing and publishing something to this effect, although it would probably be less focused on this specific decision than what you suggest.
“What was the process by which CEA decided to expand? What sorts of evidence were gathered? How heavily did COVID weigh in this decision? [...]”
This decision was made over the span of approximately a week, after a bunch of meetings and ad hoc conversations. We consulted several experts, including our COVID Board, and we went through several iterations of an expansion design. I don’t have time to carefully write out all the things that happened in this process.