Jan 2022 survey of Oxford/Cambridge/Stanford organizers
We surveyed some full-time group organizers on how valuable they’d found various aspects of CEA support, versus support from non-CEA people (GCP, Lightcone, Buck Shlegeris – EAIF, Claire Zabel – Open Phil, EAIF, Stanford residencies). We gave them the option to be anonymous.
We split this up into 13 types of CEA support (UK group leaders retreat, US retreat, calls, etc.), and 8 types of non-CEA support. They rated things on a 1-7 scale, based on how useful they found them.
Ignoring N/As, CEA activities got an average score of 4.2/7, while non-CEA activities got an average score of 5.1/7. Summing up scores (which doesn’t have a clean interpretation), CEA totaled 246 points and non-CEA people (GCP, Icecone (a winter retreat hosted by Lightcone), Stanford team, Cambridge’s online course) totaled 201 points.** This may indicate that CEA is providing a wider breadth of less intensely valued services. On the other hand, we asked more detailed questions about CEA’s services, so the total-points figure could be biased upwards.
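The two aggregates described above (averages that skip N/As, and raw point totals) can be sketched as follows. The ratings here are made-up illustrations, not the survey data; the point is that a provider asked about in more fine-grained categories accumulates a higher total even at a lower average:

```python
def average_ignoring_na(ratings):
    """Mean of the 1-7 ratings, skipping None (N/A) responses."""
    scores = [r for r in ratings if r is not None]
    return sum(scores) / len(scores)

def total_points(ratings):
    """Sum of all numeric ratings; grows with the number of items asked about."""
    return sum(r for r in ratings if r is not None)

# Hypothetical example: more items rated for one provider than the other.
cea_ratings = [4, 5, None, 4, 3, 5, 4]   # more items, lower average
non_cea_ratings = [5, 6, 5, None, 6]     # fewer items, higher average

print(round(average_ignoring_na(cea_ratings), 1))      # prints 4.2
print(round(average_ignoring_na(non_cea_ratings), 1))  # prints 5.5
print(total_points(cea_ratings) > total_points(non_cea_ratings))  # prints True
```

This is why the total is flagged as lacking a clean interpretation: splitting one provider's support into more line items inflates its total without changing how useful any individual item was.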
Looking in more detail at scores, it seems that support calls with CEA staff members were less useful than support calls from non-CEA staff members, retreats were generally more useful, and various forms of funding were quite useful. Different leaders found quite different things useful.
Some more direct comparisons:
1:1s:
CEA 1:1s were rated 3.8/7
CEA in person campus visits 4.3/7
GCP 1:1s were rated 4.7/7
1:1s with others (e.g. Claire, Buck) were rated 5.0/7
Retreats/events:
CEA’s summer retreat and EAG London retreats averaged 4.3/7
Icecone averaged 4.9/7
GCP’s summer residency averaged 5.0/7
Stanford’s residencies were 5.5/7
Funding:
CEA’s revised expense policy/Soldo cards were rated 4.1/7
CEA’s Funding for Campus Specialist Interns was rated 5.0/7
EAIF funding was 5.8/7
Other resources:
CEA’s remote community building fellowship was 3.0/7
GCP’s handbook was rated 4.3/7
(CEA) Lifelabs management calls were 4.4/7
GCP’s advice on how to do 1:1s was rated 4.5/7
Cambridge’s online cause specific programs were rated 6.0/7
Overall, this suggests that others provided more targeted and useful support. I think these results suggest that CEA did provide some meaningful value to these group leaders, but that it might be better to cede this space to others if they have the interest and capacity to take it on.
** Notes on interpreting this: I think we split CEA activities up in a more fine-grained way, which may have biased scores for individual activities downwards. I also think that some of these activities (e.g. the UK/US retreats) were aimed not at these organizers, but at getting less involved organizers more excited. Also, low averages might be fine when you provide many things: something can be really useful to some organizers but useless (and easy to ignore) for others.
Summary: CEA support for earlier-stage focus-uni group organizers
We surveyed attendees of our January Groups Coordination Summit, both on that particular event, and also on what support had been more generally useful to them.
Key figures:
Participant retreat average: 7.9/10
% saying their plans for the next 6 months are better: 88%
CEA support average (overall): 6.4/10
Ignoring N/As, a similar gap remains: CEA activities got an average score of 4.8/7, while non-CEA activities got an average score of 5.4/7. The average scores are higher overall, which suggests that earlier-stage groups can be helped more intensively by outside support.
Summing up scores (which doesn’t have a clean interpretation), CEA totaled 297 points and non-CEA people (GCP, Icecone, Stanford team, Cambridge’s online course) totaled 345 points.
Some more direct comparisons:
1:1s:
CEA 1:1s were rated 4.0/7
GCP 1:1s were rated 5.0/7
Calls with others (e.g. Claire, Buck) were rated 5.0/7
Retreats/events:
CEA’s summer retreats and EAG London retreats averaged 4.9/7
Icecone averaged 5.7/7
Stanford’s residencies were 6.0/7
GCP’s summer residency averaged 6.3/7
Funding:
CEA’s revised expense policy/Soldo cards were rated 5.1/7
CEA’s Funding for Campus Specialist Interns was rated 4.8/7
EAIF funding was 5.9/7
Other resources:
GCP’s handbook was rated 4.4/7
GCP’s advice on how to do 1:1s was rated 4.8/7
(CEA) Lifelabs management calls were 5.0/7
CEA’s remote community building fellowship was 5.3/7
(CEA) University Group Accelerator Program (UGAP) was rated 5.3/7
Cambridge’s online cause specific programs were rated 5.8/7
For this group, retreats/events seem better when longer and/or focused on a narrow project (Icecone, GCP’s summer residency, Stanford’s residencies) compared to our shorter retreats.
Thanks for sharing this data. Would it be possible to share the wording of a sample question, e.g. for 1:1s, and how the scoring scale was introduced?