EA Facebook New Member Report
Summary: Data about where people joining the EA Facebook group first heard of EA.
EA Facebook group moderators Claire Zabel and I, with some help from Julia Wise, have been sending a welcome message to every new member when we add them. In doing so we aim partly to make a good first impression, partly to make people feel welcome, and partly to provide a point of contact for questions.
We also ask them to tell us where they first heard about EA. By doing this, we gather data about where new EAs are formed. Hopefully this data will be helpful for future marketing efforts. As joining the group requires moderator approval, we reached substantially every member who joined over the past 31 days. Obviously we’re only sampling a subset of EAs—those who join the Facebook group—but we are sending the message to ~100% of that subset, and no one else. This contrasts with the big EA census, which sampled from a much wider group, but where it was less clear how representative the sample was.
The data
Between 2015/06/22 and 2015/07/22, 375 people joined the group, bringing us to 6478 members.
Of the ~371 people we messaged*, 216 responded—a 58% response rate. We then tried to fit each response into a broad category like ‘Facebook’ or ‘LessWrong’. Here is the data in table form:
28% Friend
16% Facebook
14% Other
12% Peter Singer
12% LessWrong/CFAR/Eliezer/HPMOR/SSC/MIRI
10% Media Article
4% 80k
4% GiveWell
3% Colleague
3% Philosophy
3% EAG
3% Animal Rights
2% Local Group
2% NA
2% Will
1% Akilnathan Logeswaran
1% Family
1% Christianity
1% GWWC
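As a rough illustration of how figures like these are derived, here is a minimal Python sketch of the tallying. The response lists below are made up for the example, not the actual survey data; since a respondent could name more than one source, the percentages need not sum to 100%.

```python
from collections import Counter

# Hypothetical raw responses: each respondent may name more than one source.
responses = [
    ["Friend"],
    ["Facebook"],
    ["Friend", "LessWrong/CFAR/Eliezer/HPMOR/SSC/MIRI"],
    ["Peter Singer"],
]

n_messaged = 371
n_respondents = len(responses)

# Response rate: respondents divided by people messaged
# (the post's actual figures are 216/371, which rounds to 58%).
response_rate = n_respondents / n_messaged

# Tally each category as a percentage of respondents (not of total mentions),
# which is why the table's percentages can sum to more than 100%.
counts = Counter(category for r in responses for category in r)
percentages = {cat: round(100 * n / n_respondents) for cat, n in counts.items()}
```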
Some notes on the data
The categories are not mutually exclusive.
‘Friend’ sometimes referred to people the new member knew in person, and sometimes to an online friend—often it was unclear. Sometimes they just gave a name, and if we didn’t recognise the name I often assumed they were a friend.
‘Facebook’ tends to refer to the ‘recommended groups’ feature on Facebook.
‘Other’ is very broad.
Peter Singer does well, mainly from his TED talk and his book.
I grouped together LessWrong, CFAR, Eliezer, HPMOR, SlateStarCodex and MIRI.
I think Media Articles were often discussing the EA Global Event, but people were sometimes ambiguous, or gave a result that would have required too much investigation—e.g. ‘NYT article’, ‘magazine article’.
80k includes blog posts and talks.
GiveWell includes Holden.
EAG includes Tyler Altman.
Local Groups includes EA groups, LW groups, Philosophy groups etc.
We also have a constant problem with spam in the group. A large fraction of the accounts that attempt to join are fake (Jacy once estimated 80%; I think it’s more like 30% of new joiners). In total, 403 Facebook accounts have been blocked since the group began many years ago, before the current moderators—virtually all for spam. This means we should be sceptical of the 6478-member figure, as many of these are probably fake accounts using the EA page to make themselves appear more credible. The new members appear to be more legitimate, but we tend to approve new members when uncertain, so there are probably fake accounts among the 375. Presumably these did not respond to our greeting.
How do these results compare to the EA census?
Some discrepancies are to be expected, as an artifact of the data collection technique. For example, we had many more people naming ‘Facebook’. The census had LessWrong as the number one source, which probably reflected the prominent link to the survey on LessWrong. However, LessWrong is still a major source in our data, suggesting that the strong result in the census was not just an artifact of disproportionate sampling.
Notably, ‘Peter Singer’ was a major source in our data—much higher than in the census data. Conversely, ‘GWWC’ was a much larger source in the census than in our data. Perhaps reflecting the recent bout of attention around EA Global, ‘Media Article’ does well in our data but does not appear in the survey data.
Friendship proves its worth in both data sets.
Should we change what we’re doing?
Sending these messages and compiling the answers is somewhat time-consuming for Claire and me.
Is this data worth gathering? Sending the messages also has other benefits, as we answer people’s questions and make them welcome. But we could save time recording the data.
Would it be better to send a link to an online survey, with standardised response options?
What other data would be most valuable? Bear in mind that we don’t want to overwhelm people!
What else can we do better?
----
Thanks to Claire for reading a draft of this post. Any errors are, of course, my own.
---
* A very small number of profiles do not allow messages from non-friends.
Thanks for writing this up! It’s very useful to be able to compare this to census data. Did you use the same/similar message for everyone? If so, I’d be interested to see what it was. This sort of thing would also be useful to a/b test to refine it. There is also the option to add people manually, bypassing the need for admin approval; did you contact these people too?
We used:
“Hey, welcome to the Effective Altruism facebook group! If you have a moment, would you mind telling us where you first heard about EA?
Thanks!
Claire
(moderator)”
We are considering a/b testing some new questions, and would love suggestions on different phrasing.
And when someone in the group adds a new member, we still have to approve them. We messaged them as well.
How about using different questions from the EA Census? There was already a big community discussion on what suggestions to include on that, on this forum I think.
Interesting post. This data seems helpful, but it’s probably not worth gathering constantly—maybe on an annual basis or something. Of course, then there are representativeness issues. I would think an online survey would be less effective, but there might be a way to automate this using some software. I don’t think Hootsuite can do it but some other app for automating Facebook posts and messages could help.
Thank you for sharing! I think a/b testing this seems like a really good idea—even just testing the way you phrase the question, as opposed to testing other questions. A static online survey would definitely cut down on the time investment, since it would collect all the data for you; however, it would also cut into your response rate (more clicks = more work).
It seems like continuing to gather this information over the course of the EA Global meetings and the launch of Will’s book would be valuable, given the likelihood of continued rapid growth. Past that, it would be more useful to focus on using the data as opposed to collecting it.
Right now, across both surveys, it looks like LW and word of mouth are the best recruiting tools. Continuing to enact marketing strategies across those two platforms seems like the best course of action, meaning we should probably encourage new members to tell their friends and invite them to meetings. It also seems like a good idea to keep messaging people, both to reinforce the welcoming feeling and because people referred by word of mouth will likely appreciate, and need, one-on-one interaction to stay interested and motivated.
It’d be interesting to test that. One factor that will cut the other way is that some people are more comfortable answering an online survey (often selecting preset answer options) than getting into a discussion with another human being.
Tying this into the EA census sounds like a good idea as it’d provide a helpful additional subsample, at least for some subset of questions.
Thanks for posting this. I also compared it myself to the data we got from a random sample of the EA Facebook group. I don’t know whether your comparisons above are to our overall results or just to the random Facebook sample we took?
First, some points on the comparison:
Our data sets differ slightly because you allowed people to select more than one place as “where you first heard about EA”, whereas we allowed only one. (We allowed more than one option for our “which helped get you more involved in EA” question.) This might skew things a bit.
- It also meant I had to adjust your numbers a bit to be able to compare percentages.
- Comparing our numbers is also quite difficult because our categories don’t line up. For example, nearly 40% of your responses were “Other”, which didn’t come up at all in ours (mostly options with quite small numbers too).
- To make our data more easily comparable, I merged a few categories (for example, I included “colleague” and “family” in “friend”).
- Our results agree on ACE/AR being very low numbers.
Comments on the actual data: I basically agree that there were some significant differences but overall not enormous divergence.
- LessWrong, as you note, is not the biggest point of divergence (~10% vs ~20%).
- GWWC is probably the biggest difference: 14% for us and basically a complete absence for you (<1%). I don’t think that can be explained by any simple sampling bias (of the kind people posited for LW).
- Relatedly, we had double the 80K responses you did.
- You had twice as many Singer responses; but our category was (TED) Singer, so I think some of our general Singer responses may have ended up in TLYCS (or Other, maybe), so our Singer+TLYCS scores are pretty similar: 8% vs 9%.
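One simple way to make multi-select counts comparable with a single-select survey—a guess at the kind of adjustment described above, not necessarily what was actually done—is to renormalise each category by total mentions rather than by respondents. The counts below are made up for illustration, not the actual survey numbers.

```python
# Hypothetical multi-select tallies: respondents could name several sources,
# so these counts sum to more than the number of respondents.
multi_select_counts = {"Friend": 60, "LessWrong": 26, "GWWC": 2, "Other": 30}

total_mentions = sum(multi_select_counts.values())

# Share of all mentions: like single-select percentages, these sum to 100%,
# which makes them roughly comparable across the two survey designs.
adjusted = {
    source: 100 * count / total_mentions
    for source, count in multi_select_counts.items()
}
```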
What to make of the GWWC/CEA wipeout? (For us, CEA and LW had basically equal influence; for you, the ratio was 2:1 in favour of LW.) I would guess the most likely explanation is timing. There was only a year between our surveys’ results being published, but we were randomly sampling members of the FB group, whereas you were only asking new members. So we’d have sampled a lot of people who have been members of the group, and EAs, for a few years, whereas you are sampling solely new people. People involved in EA from close to the beginning will plausibly be much more likely to have heard of it from GWWC; new people, it seems, much less so. (Even if you count EAG, local groups, and Will and Tyler personally all as CEA—which would be unreasonable anyway—the numbers don’t jump that much.) So it seems plausible that CEA is much less of an influence, as a proportion, than it was in the early years. It will be interesting to see whether this trend continues and is reflected in our new survey.
Let me know if you’re expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.