Appendix:
Jan 2022 survey of Oxford/Cambridge/Stanford organizers
We surveyed some full-time group organizers on how valuable they’d found various aspects of CEA support versus support from non-CEA people (GCP, Lightcone, Buck Shlegeris (EAIF), Claire Zabel (Open Phil), EAIF, Stanford residencies). We gave them the option to be anonymous.
We split this into 13 types of CEA support (UK group leaders retreat, US retreat, calls, etc.) and 8 types of non-CEA support. Organizers rated each on a 1-7 scale based on how useful they found it.
Ignoring N/As, CEA activities got an average score of 4.2/7, while non-CEA activities got an average score of 5.1/7. Summing up scores (which doesn’t have a clean interpretation), CEA totaled 246 points and non-CEA people (GCP, Icecone (a winter retreat hosted by Lightcone), the Stanford team, Cambridge’s online course) totaled 201 points.** This may indicate that CEA is providing a wider breadth of less intensely valued services. On the other hand, we asked more detailed questions about CEA’s services, so CEA’s total could be biased upwards.
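One way to see why the raw totals mostly track breadth of coverage rather than per-service value: dividing each total by its average recovers the implied number of ratings behind it. A minimal sketch of that arithmetic in Python (using only the rounded aggregates above, so the counts are approximate):

```python
# Implied number of ratings = total points / average score (N/As excluded).
# These use only the rounded aggregate figures reported above.
cea_avg, cea_total = 4.2, 246
non_cea_avg, non_cea_total = 5.1, 201

cea_n = cea_total / cea_avg              # ~59 ratings across 13 service types
non_cea_n = non_cea_total / non_cea_avg  # ~39 ratings across 8 types

print(f"implied ratings -- CEA: ~{cea_n:.0f}, non-CEA: ~{non_cea_n:.0f}")
```

So CEA’s higher total largely reflects more ratings spread across more service types (13 vs. 8), which is consistent with the “wider breadth of less intensely valued services” reading.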
Looking at the scores in more detail, support calls with CEA staff members were rated as less useful than support calls with non-CEA staff members, retreats were generally rated as more useful than calls, and various forms of funding were rated as quite useful. Different leaders found quite different things useful.
Some more direct comparisons:
1:1s:
CEA 1:1s were rated 3.8/7
CEA in-person campus visits were rated 4.3/7
GCP 1:1s were rated 4.7/7
1:1s with others (e.g. Claire, Buck) were rated 5.0/7
Retreats/events:
CEA’s summer retreat and EAG London retreats averaged 4.3/7
Icecone averaged 4.9/7
GCP’s summer residency averaged 5.0/7
Stanford’s residencies averaged 5.5/7
Funding:
CEA’s revised expense policy/Soldo cards were rated 4.1/7
CEA’s Funding for Campus Specialist Interns was rated 5.0/7
EAIF funding was rated 5.8/7
Other resources:
CEA’s remote community building fellowship was rated 3.0/7
GCP’s handbook was rated 4.3/7
(CEA) Lifelabs management calls were rated 4.4/7
GCP’s advice on how to do 1:1s was rated 4.5/7
Cambridge’s online cause-specific programs were rated 6.0/7
Overall, these scores suggest that others provided more targeted, useful support. I think they also suggest that CEA provided some meaningful value to these group leaders, but that it might be better to cede this space to others who have the interest and capacity to take it on.
** Notes on interpreting this: I think we split CEA activities up in a more fine-grained way, which may have biased scores for individual activities downwards. I also think that some of these activities (e.g. the UK/US retreats) were not aimed at these organizers, but at getting less involved organizers more excited. Also, it might be fine to have low average scores across a lot of offerings, e.g. if the things you’re providing are really useful to some organizers but useless (and easy to ignore) for others.
Summary: CEA support for earlier-stage focus university group organizers
We surveyed attendees of our January Groups Coordination Summit, both on that event and on what support had been generally useful to them.
Key figures:
| Metric | Score |
| --- | --- |
| Participant retreat average (out of 10) | 7.9 |
| % saying their plans for the next 6 months are better | 88% |
| CEA support average, overall (out of 10) | 6.4 |
Ignoring N/As, a similar gap remains: CEA activities got an average score of 4.8/7, while non-CEA activities got an average score of 5.4/7. The average scores are higher overall than in the survey above, which may indicate that earlier-stage groups benefit more from outside support.
Summing up scores (which doesn’t have a clean interpretation), CEA totaled 297 points and non-CEA people (GCP, Icecone, the Stanford team, Cambridge’s online course) totaled 345 points. (Dividing totals by averages, as above, implies roughly 62 CEA ratings versus 64 non-CEA ratings, so here non-CEA support was rated about as often as CEA support despite covering fewer types.)
Some more direct comparisons:
1:1s:
CEA 1:1s were rated 4.0/7
GCP 1:1s were rated 5.0/7
Calls with others (e.g. Claire, Buck) were rated 5.0/7
Retreats/events:
CEA’s summer retreats and EAG London retreats averaged 4.9/7
Icecone averaged 5.7/7
Stanford’s residencies averaged 6.0/7
GCP’s summer residency averaged 6.3/7
Funding:
CEA’s revised expense policy/Soldo cards were rated 5.1/7
CEA’s Funding for Campus Specialist Interns was rated 4.8/7
EAIF funding was rated 5.9/7
Other resources:
GCP’s handbook was rated 4.4/7
GCP’s advice on how to do 1:1s was rated 4.8/7
(CEA) Lifelabs management calls were rated 5.0/7
CEA’s remote community building fellowship was rated 5.3/7
(CEA) University Group Accelerator Program (UGAP) was rated 5.3/7
Cambridge’s online cause-specific programs were rated 5.8/7
For this group, retreats/events that were longer and/or focused on a narrow project (Icecone, GCP’s summer residency, Stanford’s residencies) seem to have been better received than our shorter retreats.
Thanks so much for writing this!
I just went through the process of trying to hire an executive assistant in the US, so I thought I’d contribute a bit. My sample size is relatively low (I only tried 1-2 executive assistants from ~3 services), but here are some things I found helpful:
* In my experience, additional cost --> additional quality. Boldly (the firm we will likely end up using, at $50/hour) seemed to have higher service quality than places like Time Etc. ($30/hour).
* I tried a series of ‘work tests’. Here are the two I found most predictive:
a) I’m trying to [find an Airbnb here / find a conference center / ship my bags using a remote service, for example]. Could you help me figure out the lowest-cost provider that has good client reviews? Here are some other criteria that are important (X, Y).
This allowed me to see presentation of information, accuracy, and reasoning.
b) I’m interested in services from [X company]. Could you write an email inquiry asking whether they are taking new clients and what they charge? It would be great to try to find a personal email address for a staff member.
This allowed me to see basic email composition skills, spelling, and resourcefulness online.
* I found it helpful to test out two companies in parallel for a month; that way I could make a direct head-to-head comparison. I would ask both executive assistants to perform the work tests above, then ask for a new executive assistant from the firm whose assistant was performing to a lower quality standard, and keep giving one-off tasks to the one that was performing well, to get more data. I think this worked particularly well in my situation because we had some lead time.