This is really cool to see! Thanks for sharing this level of detail and transparency from one of the most important EA organizations!
This update and CEA’s plans for 2021 mention the terms “highly-ranked university groups” and “focus universities” a lot. Could you clarify what you mean by one or both of those terms? (e.g. are you looking at the top 40 universities globally based on a specific ranking website?) Thanks!
These terms are generally referring to 19 university groups which we give some additional support (e.g. we offer extra 1:1 calls with them, and we pilot some programs with them). This is on top of the support we offer all groups (e.g. online resources, funding for events, 1:1 calls, advice over Slack/email).
The groups are chosen primarily based on the university’s track record of having highly influential graduates (e.g. Nobel prize winners, politicians, major philanthropists). We also place some weight on university rankings, universities in regions with rapidly-growing global influence, the group’s track record, and leader quality.
Current focus university groups in no particular order: Harvard, Swarthmore, Oxford, London School of Economics (LSE), Cambridge, Georgetown, Stanford, Hong Kong University, Yale, Princeton, MIT, Caltech, Berkeley, University of Chicago, Columbia, Penn.
Got it! You listed only 16 universities, though, while you mentioned you’re referring to 19 groups. Do some universities have more than one group (e.g. Harvard and Harvard Law)?
Yes, that’s right.
“We have … improved our cybersecurity, and streamlined a number of HR systems.”
Hurray! Well done
Thanks—I’ll pass this on to the people involved!
Thank you for posting this!
Are you able to reveal who this YouTube creator is? I’m surprised by how little EA YouTube content there is aside from recorded talks. I feel like an EA-related Veritasium or Kurzgesagt could be super helpful and popular as per this post.
Thanks for asking. I’m not able to say more at this point about that specific creator.
I think you’re asking a good, implied question I share though: which comms channels would be most promising, for creating or sharing additional EA content?
I’m interested in analysis of those sorts of questions, and see them as part of the strategic comms role we’re hoping to hire for this year.
(I work at CEA).
Would you be able to provide a Net Promoter Score analysis of your Likelihood to Recommend metrics? I find NPS yields different, interesting information from an averaged LTR and should be very straightforward to compute.
Sure! I’ve asked the relevant people to respond with the NPS figures if it’s quick/easy for them to do so, but they might prioritize other things.
Btw, I disagree about how useful NPS is. I think it’s quite a weird metric (with very strong threshold effects between 6/7 and 8/9, and no discrimination between a 6 and a 1). That’s why we switched to the mean. I do think that looking at a histogram is often useful though: in most cases the mean doesn’t give you a strong sense of the distribution.
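(For reference, here’s a minimal sketch of the two metrics under the standard NPS convention, where 9–10 are promoters and 0–6 are detractors; the sample ratings are made up, just to show how the metrics can diverge:)

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

def mean_ltr(ratings):
    """Mean likelihood-to-recommend on the same 0-10 scale."""
    return sum(ratings) / len(ratings)

# Two made-up distributions with the same mean but different NPS:
passives = [8, 8, 8, 8]      # all passives
polarized = [10, 10, 10, 2]  # three promoters, one detractor
print(mean_ltr(passives), nps(passives))    # 8.0 0.0
print(mean_ltr(polarized), nps(polarized))  # 8.0 50.0
```

Both groups average 8.0, but NPS separates them sharply, which is exactly the threshold behaviour being debated here.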
Thanks! I guess I think NPS is useful precisely because of those threshold effects, though I agree it may not handle the discrimination between a 6 and a 1 well. Histograms seem great!
Hmm, I still think the threshold effects are kinda weird, and so NPS shouldn’t be the main measure. (I know you’re just asking for it as supplementary info, and I think we’d maybe both prefer mean + histogram.)
There’s a prima facie case, that’s like: the threshold effects say that you care totally about the 6/7 and 8/9 boundaries, and not-at-all about the 5/6, 7/8, 9/10 boundaries. That’s weird!
I could imagine a view that’s like “it’s really important to have enthusiastic promoters because they help spread the word about your product” or something, but then why would that view want you to care not-at-all about the 9/10 boundary? I imagine 10s are more enthusiastic promoters, and it seems plausible to me that the 9/10 differential is the same as or greater than the 8/9 differential.
And why would it want you to care not-at-all about the 7/8 boundary? I imagine 8s could be enthusiastic promoters, more so than 7s.
And similar comments for a view that’s like “it’s really important to avoid having detractors, because they put people off”.
I could also imagine a kinda startup-y view that’s like “it’s really important to get excellent product market fit, which means focusing on getting some people to really love your product, rather than a large group of people to like it”. But on that view, why ignore the 9/10 boundary? And why care about detractors?
I also think that maybe all of the above views make more sense when your aim is to predict whether your product will grow virally (not our focus), vs. whether it’s generally high quality/providing something that people want (more our focus). So they might just not carry over well to our case.
Thanks for explaining your view! I don’t really have super strong views here, so I don’t want to labour the point, but I thought I’d share my intuition for where I’m coming from. For me it makes sense to have thresholds at those places because they actually carve up the buckets of reactions better than the linear scale suggests.
For example, some people feel weird rating something really low and so they “express dislike” by rating it 6/10. So to me the lowest scorers and the 6/10ers probably have more similar experiences than their linear scores suggest. I claim this is driven by weird habits/something psychological about how people are used to rating things.
I think there’s a similar thing at the 7/8/9 distinction. When people think something is “okay” they just rate it 7/10. But when someone is actually impressed by something they rate it 9/10, which is only 2 points more but captures quite a different sentiment. Also, from experience I’ve noticed some people use 9/10 in place of 10/10 because they just never give anything 10/10 (e.g. they understand what it means for something to be 10/10 differently from others).
The short of it is that I claim people don’t use the linear scale as an actual linear scale, so it makes sense to normalise things with the thresholds, and I claim that the thresholds are in the right place mostly just from my (very limited) experience.
Thanks for explaining! The guess about how people use the scale seems pretty plausible to me.
EA Global: Reconnect NPS was 20%
For groups support calls, one staff member’s NPS was 83% and another’s was 55%. (They were talking to different user groups, which probably explains some of the discrepancy.)
Thanks for posting this. I find it quite useful to get an overview of how the EA community is being managed and developed.
Thanks for writing and publishing this! Lots of exciting progress. I have some questions, which I’ll separate into different comments:
Is it possible to get links to these documents? I and other student chapter leaders in EA Philippines would be interested to read them, and I think other student group organizers would be interested too. In particular, we think a lot about what the best organizational structure is for uni groups, and what strategies to use for a) goal-setting, b) deciding what projects to run, and c) dividing roles and responsibilities.
Hey Brian. I’d have to ask the individuals who wrote up their docs, but the plan is definitely to eventually share more of these types of group writeups widely. They weren’t written with a broad audience in mind, but I feel like several leaders would be keen to share their writeups more publicly after cleaning them up a bit. I’ll nudge people on this and ask if they’re keen.
Got it, thanks! Could they be compiled and put on the EA Hub Resources, such as on this page? That would probably be the best place for them.
On Fellowship Data:
If a fellowship starts in Feb but ends in April, does that count toward Q1 data or Q2 data?
Regarding the number of participants for fellowships, I’m not sure how that data is collected, but maybe Marie Buhl reached out privately to fellowship organizers to collect it. Anyway, there’s a good chance that data for EA Philippines’s chapters isn’t accurately included in the Q1 data.
For example, 2 of our student chapters in EA Philippines, EA UP Diliman and EA Blue, have been running intro fellowships since Feb and March respectively. They have a combined 67 participants (36 and 31 respectively), though a few have already dropped out or are not set to graduate. I assume both universities are non-focus universities. So if our participant data is part of your Q1 data set, that would mean 29% of fellows (67/230) from non-focus universities are from our universities. I think that’s too high, so my assumption is that most or all of their fellows are not yet in the data set.
Related to #2, I think it would be useful to have a public Google Sheet with rows for which groups are running fellowships, and columns for what country and university they’re from, when they’ve started, when they will end, how many participants, and how many graduates (if data exists on that already). I assume most fellowship organizers would be okay with this data being public. I think having this public Google Sheet can let us easily know which groups are and aren’t on the list yet. I think it’s also good for others to know which universities/cities have fellowships—maybe they can recommend friends from those universities/cities to join those fellowships.
I think it would be good to include data for both participants and graduates. Would be interesting to know what the avg. drop-off rate is for these fellowships.
Also, a separate Google Sheet that lists which groups are going to run fellowships or reading groups in the next 1-3 months (and whether they accept people virtually or not) could also help people interested to know what fellowships or reading groups are coming up. But I think this is less important to do than the other things I suggested or mentioned above.
Hi Brian,
Great to hear about your enthusiasm for fellowships!
Q1 data—we count fellowships based on when they start
We collect data on this at the end of each fellowship, so the non-EAVP participant numbers in the report are guesses based on Marie’s conversations with group leaders. For groups Huw was not in touch with on a regular basis, Marie assumed an average of around 12 participants/group, so it’s possible that the number is higher based on EA Philippines numbers (although a number of fellowships are smaller than 12 participants).
Marie is planning to make a spreadsheet like this from next quarter and will post it on the EA Groups Slack.
We’re collecting data on both starting and finishing numbers of participants at the end of each fellowship.
Marie is planning to include future fellowships in the spreadsheet. Adding a tab for other reading groups seems like a good idea.
Feel free to reach out to Marie directly on the EA Groups Slack if you’d like to discuss more.
On #1 and #2: Got it! I guess CEA should then be more cautious about reporting participant data for non-EA Virtual Programs participants (i.e. by adding significant caveats or not reporting the data yet), since you collect data at the end of fellowships and the numbers are just guesses before then.
On #3-5: Great!
On #1 and #2: In footnote 5 of our report, where we reported this data, we said: “As some students leave fellowships before finishing, and fellowships are run independently through groups, our estimates of the number of fellowships in Q4 and participants across Q1 / Q4 are somewhat uncertain.”
I do think the benefits of reporting estimates are more valuable than only reporting precise information, but we do try to add additional detail about where the uncertainty comes from. I’ll keep this comment in mind when we do our Q2 report as well.
Yeah, estimates are probably better than nothing. Making the caveats/uncertainties about the data more visible might help, e.g. as asterisks beside some numbers in the fellowship data table rather than in a footnote. But yeah, it’s a minor thing!
Thanks for this, really interesting! I’m surprised that the total attendance of fellowships isn’t even higher. Do you have a feel for whether they’re typically constrained by mentors or signups? In my experience helping run fellowships, many people are surprisingly interested but haven’t heard about EA. Have you looked at ways to reach more of these people?
For the large EA Virtual Programs round, at first we were worried about having enough facilitators. But we actually had quite a number of volunteer facilitators (over 100!), so we then focused on getting more participants. In the end, participant demand matched the available facilitators. As we mentioned, we’re working to build more operations capacity for the virtual programs version of our fellowship. Once we do, we hope to offer it on a more consistent basis so more people can sign up.