Would it be easy to send a link to the comment about the $7.7M program you mentioned?
Howie_Lempel
I think the tone of this post projected a confidence around your empirical views that initially led me to gloss over the actual data you present. But on a second read I noticed that the NPS at EAG was 5 points lower than the threshold you give for “excellent” (and far short of what you say is typical of Apple). This felt a bit jarring in light of the takeaways that “EAs really love EA”—so much so that being more welcoming isn’t a very pressing problem.
Nothing in this post is actually inconsistent with an NPS that’s short of excellent. I don’t even really have an opinion about whether NPS is a useful measure. But it does make me feel like the “potential implications” you list are things you already believed. Did the data affect your views much one way or another? Do you have a sense for the threshold at which you would instead have written a post saying “even though the NPS at EAG was only good, not great, I still believe that making EA more welcoming is not one of the most pressing problems facing EA”?
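For anyone unfamiliar with the metric being debated here: NPS is standardly computed as the percentage of promoters (9–10 on a 0–10 “how likely are you to recommend?” question) minus the percentage of detractors (0–6). A minimal sketch; the rating distribution below is invented purely for illustration:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 'how likely are you to recommend?' scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical responses: 3 promoters and 3 detractors cancel out,
# so a survey full of 7s and 8s can look mediocre by this metric.
sample = [10, 10, 9, 8, 8, 7, 7, 6, 5, 3]
print(nps(sample))  # 0.0
```

One quirk worth keeping in mind when interpreting comparisons to companies like Apple: because 7s and 8s count for nothing, NPS is very sensitive to the shape of the distribution, not just its mean.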
Similarly, I don’t really see how either of your conclusions could really be a “potential implication” of the fact that more than half of EAG applications were due to referrals. To me this data seems equally consistent with the exact opposite conclusions—only people who already know an EA well end up applying to EAG, which is evidence that EAG is unwelcoming and too cultish. Alternatively, if very few applications had come through referrals, say 5%, you could just as easily argue that this was evidence against the need to be more welcoming (tons of people who don’t know us feel welcome to apply) and against EA being too cultish (it’s not a closed system at all)!
Obviously it’s ok for you to have views that aren’t driven by this fairly limited data. But if the data isn’t really one of the stronger factors informing your views, I think I’d probably prefer to see them presented in separate posts. I think there’s otherwise a risk of building a habit of using “data as soldiers” (https://wiki.lesswrong.com/wiki/Arguments_as_soldiers) and losing opportunities to update.
Haven’t had a chance to listen to your talk, which might clear this up, but while “don’t select on the correlates” does technically capture Rohin’s point, it doesn’t really strike me as making the point in a more crisp way, especially when contrasted with being welcoming.
I think one of the more insidious features of the type of phenomenon Rohin’s talking about is that, from the inside, it doesn’t FEEL like you’re making a selection at all. Indeed, apparently EA Berkeley’s intentional/explicit attempts at selection were basically random—selecting for almost nothing other than altruism. But, despite the lack of explicit selection, there was still a selection effect.
Asking people to do selection differently feels pretty far removed from the actual actions (if any) we might want someone to take if a lot of those people don’t by default feel like they’re doing selection at all.
I’d also be interested in examples of this.
I don’t have a take on these specific suggestions but I wanted to mention that I really like your effort to think about how the EA community can be relevant to people who aren’t actively thinking about cause selection at a given time. I think this is going to be incredibly important if people at different stages in their careers are going to affiliate with the community.
I agree that thinking explicitly about the goals of the conference would be good. Fwiw, though, my instinct is that trying to quantify it into an EV estimate would be a bit of a distraction from the main benefits.
Huh. I don’t know anything at all about design or branding but I thought the EAG website made a step toward minimalism this year. The about page seems pretty minimalist at least. http://eaglobal.org/about
If it’s easy to describe, I’d be curious about what aspects of the page were not minimalist. Is it primarily the pictures?
Many of the examples you cite as potentially superior alternatives for spending $3,900 carry a lot of opportunity cost because they take much more time than a weekend at CFAR. Once that’s accounted for, many of them are much more costly than CFAR for a lot of people.
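To make that comparison concrete, one rough way to fold time back into the price (every number here is hypothetical, including the hourly figure, which will vary a lot by person):

```python
def total_cost(fee_usd, hours_required, value_per_hour_usd):
    """Sticker price plus the opportunity cost of the time consumed."""
    return fee_usd + hours_required * value_per_hour_usd

VALUE_PER_HOUR = 50  # assumed value of a marginal hour; substitute your own

# A ~4-day workshop vs. a nominally free alternative that eats
# 10 hours/week for six months (both scenarios invented for illustration).
workshop = total_cost(3900, 40, VALUE_PER_HOUR)         # 3900 + 2000 = 5900
free_alternative = total_cost(0, 260, VALUE_PER_HOUR)   # 0 + 13000 = 13000
print(workshop, free_alternative)
```

On these made-up numbers, the “free” option is over twice as expensive once time is priced in, which is the point about opportunity cost above.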
Thanks for being so open to feedback and non-defensive on this and thanks especially for updating the body of the post. I think there’s a big problem where people change their minds due to feedback but it never gets propagated out to readers b/c it’s buried in a comment section. With preliminaries out of the way. . .
This wasn’t the tone I was going for. On my reading of the post, it’s pretty hedge-y, with the exception of the title. Can you help me out by pointing out some ways that I seem overconfident in the empirical view?
Looking back at it, I do think the headline really primed me here. But I think the other things were: “the NPS score coupled with the referral evidence suggests that as a brand EA has a very dedicated fanbase that is willing to promote the brand.” I see how this could feel like a nitpick because you said “suggests” and not “proves,” but I think somebody reading the post quickly, glossing over the actual numbers, and trusting your description of their implications (which I think is pretty common/reasonable) would take this to mean that the NPS score unambiguously is evidence in this direction and the uncertainty is due to the fact that it’s just one form of evidence.

I’m not sure exactly what I would’ve said instead, but I would’ve said something more moderate when describing data that’s less than excellent on its own terms. Taking out the phrase “very dedicated” would’ve made a pretty big difference, I think. It would also have made a big difference if there was some explicit discussion of the fact that the NPS score was good-but-not-great and could reasonably have been better. Putting the data in there for comparison definitely helps, but if I’m reading something about metrics I’ve never seen before, I kind of expect the writer to do the work for me and tell me how to interpret the comparison. If there’s some data and then the author says it suggests a “very dedicated fanbase,” I’m likely to assume that the score EA got is relatively close to the score Apple got. If it’s not, that seems like an important enough fact to grapple with instead of just present.

The other places you implicitly described what the data mean are “highly loyal community willing to promote EA” and “high brand loyalty.” Combining all of this, I think the post really reads like EA killed it on the NPS front.
I don’t see why this is a reasonable conclusion to reach. I think the welcoming/unwelcoming distinction is a claim about the experience of being in the community and interacting with EAs. Since new people haven’t had a chance to interact with EAs, it would be surprising if they found the community unwelcoming. It could be that the EA brand prevents people from wanting to join the community in the first place. That seems like a hypothesis worth testing, but it doesn’t seem to me like a claim about how welcoming the community is.

Fair points but I’m not convinced.
I’d guess that there are a lot of people who have had enough contact with the EA community to have been affected by its welcomingness (or lack thereof) but who wouldn’t have counted as applying to EAG b/c of someone else in EA according to your metric. People with relatively weak connections to EA are likely to be most affected by welcomingness, so it seems possible that the relevant margin here is whether people in the non-referral group feel welcome to apply. I think a major mechanism through which welcomingness has effects is welcomingness → experience of people interacting with the community → EA’s reputation/brand among people outside the community. So I’d actually expect welcomingness to have a big effect on whether EA has a brand that gets people to want to join the community in the first place. For a fairly big, somewhat outward-facing event like EAG, where the pool of potential non-EA applicants is so large compared to the pool of potential EA applicants, it seems possible that this mechanism (through which welcomingness decreases the proportion of applicants coming through referrals) could dominate your proposed mechanism (through which welcomingness increases that proportion). JTBC, I don’t have a net take on the above. My main point is just that the direction is ambiguous, so I don’t think the data says much about welcomingness.
I don’t have a great answer to this and think it’s pretty tough to capture with data. Given that, I’d probably go with something like Ajeya’s suggestion. Just asking people whether they felt welcome, whether they had any experiences that made them feel unwelcome, whether they plan to continue to engage the community and why, whether the community could have done things to make them feel more welcomed, etc. seems like the best bet.
You could also ask these questions of EAG attendees who had relatively little contact with the community before attending.
Thanks. That helps. I think I agree with you.
I didn’t take your post as a criticism of the website but thanks for clarifying!
Sorry—I was unclear. I think it’d be a distraction to the EAG organizers themselves. Trying to come up with an explicit EV can increase the amount of work it takes to put something like this together by a lot and I think most of the benefits would come from just thinking hard about priorities.
Really appreciate the response here, Kerry. Adding some extra feedback for calibration. Apologies in advance that I’m realizing I’m struggling to balance the strength of my opinion with my knowledge that this was well-intended. Just to be clear, my views only here, not my employer’s.
I didn’t end up nominating anybody because I’d rather reach out to people myself. The “via EAG” thing makes me really relieved that I made this choice and will prevent me from nominating people in the future. I’m actually a bit surprised at the strength of my reaction but this would’ve felt like a major violation to me. I really dislike the idea of feeling accountable for words that I didn’t endorse. Just for example, I could plausibly have invited work contacts who I’m not super close with and whom I would be very sensitive to being perceived as spamming.
After your explanation the practice still does seem (very) deceptive to me. At the very least, I’d expect a lot of people to click on the email because they think it’s coming from me and then to realize that it came from someone else. If I received this email, I’m sure I’d eventually figure out it wasn’t from the person in the “from” line but I’d be confused for a bit and might assume that they approved it even if they didn’t write it.
Moreover, if I wanted to not only nominate someone but also send them an email advising them to attend, I could easily do so. Some people may even have done that so their nominees would have felt like they received multiple unsolicited pings from the same person. I know it would have a lower yield but I feel like EAG should have emailed [Firstname at Lastname] and asked them to ping their nominee instead of spoofing their identity in the “from” line and taking this decision out of their hands.
I’d acknowledge that most of the other practices on this thread seem like basically standard marketing techniques. They seem off-putting according to my personal taste and I’d guess they’re counter-productive but because they’re so standard it also seems likely that I’m just being biased against them because I find marketing distasteful. I want to make clear that I’d put the “via EAG” thing in another category—substantially worse than I’d expect from a typical sales email.
Lower priority stuff:
Deadlines

I don’t have a problem with rolling deadlines if it’s clear that’s what they are. I didn’t pay a ton of attention to this so I don’t have a strong take. It did seem like discounts went up as it got closer to the actual date, and I think that did feel a bit like taking advantage of the people who helped out by signing up early.
Looking through the attendee database

This language feels off-putting and slightly deceptive to me. As Kit says, it’s intended to make it sound like you were thinking of that specific person when it wasn’t the case. Unlike the “via EAG,” I think this practice is basically standard but I still really dislike it. Kit’s comment that “my vanity fooled me for a solid few seconds, by the way!” strikes me as a really good reason to discontinue this practice. I think it’s a bad experience and kind of embarrassing to feel like you’re getting a personal compliment and then realize it’s a form email.
I feel similarly about some other language Kit mentioned.
Huh! Does economics at McGill have more women than men?
Agree with most of what you said here. But I had a different interpretation of the facts with respect to the “via EAG” issue than you did.
Your impression is that:
if I recommended (e.g.) Kit to EAG and he doesn’t reply a couple of times, he gets an email with ‘greetings from Greg’ or similar in the subject heading
My impression is that “Greg Lewis (via EAG)” would appear in the “from” line (in the way that email clients often replace the sender’s email address with their name).
If I understand correctly, then the practice strikes me as much more likely to deceive a recipient.
@Kerry_Vaughan:
It’d be helpful if you could clear this up. If I was confused and you actually just put “Greetings from FirstName LastName” in the subject line or some such, I’d have a substantially weaker reaction.
Thanks. That’s what I thought.
[Speaking just for myself here, not for my employer, the Open Philanthropy Project, which is housed at GiveWell]
UPDATED 8/27/16. I added the name of my employer to the top of the post because Vipul told me offline that he thinks “my financial and institutional ties . . . could be construed as creating a conflict of interest” in this post.
One of the things that makes this decision so hard for anybody considering ETG to fund relatively small projects that staffed foundations might miss is that projects that receive funding get way more visibility than projects that do not.
This makes it incredibly hard to figure out what the right margin is and how many projects are at that margin (particularly important when you know lots of others are making the same decision at the same time). Unless they do an incredible amount of research, a potential ETGer can mostly see examples of projects they support that WERE funded and then speculate on whether they were close to not being funded. You can also look at projects that are currently fundraising but, again, it’s hard to tell in advance how many of them will actually struggle to get support.
If I were CEA/80k and wanted to make progress on this question, I think the first project I’d try would be to create a list of people willing to disclose projects that they tried and failed to fundraise for over the last year or two. Ideally, they’d also give some sense of their own opportunity cost—what they ended up doing instead (this is especially important if it included projects pitched by medium/large EA orgs where staff that didn’t get funding for one thing may have ended up just working on a different priority which is pretty different from somebody who wanted to quit their job to start something and couldn’t).
There are all kinds of reasons this would be imperfect. It wouldn’t be a complete survey. It wouldn’t account for the potential growth of the community. It wouldn’t capture all of the effects of a bigger funding pool—e.g. projects happening faster, less time wasted fundraising, people feeling more confident pitching projects in the first place because their odds are higher. But I think it’d be a lower bound with fairly high information content. If I were 80k and advising lots of people on whether to ETG at the same time, I’d like to see something like this.
A survey of EAs would probably identify a bunch of projects, and CEA could also ask ETGers and people who see lots of pitches (e.g. EAV, Carl, Nick) if they can ask rejected people whether they’d be willing to disclose. Presumably 80k also knows of advisees who considered starting an organization but couldn’t get funded. It’s a bit embarrassing to admit failure, but it’d be worth it for some people as it might also give them another shot at funding. Although whoever carried this out would have to make sure it stayed a list of projects that failed to fundraise, or else it’d just become a big pitch bank.
I’m really glad you’re writing about this. I think this is an important criticism of the way the EA movement and a lot of individuals within it (myself very much included) often come off. I think I’d suggest a different focus for what to say in these situations (although it’s compatible with many of your suggestions).
In particular, most of these suggestions seem fairly focused on stating/advocating for your own position while accurately expressing your uncertainty. Instead, when you don’t have good evidence that your view is correct, I think the most important thing to do is to focus on asking questions. I think this is the most important bit of humility. It’s also likely to lead to more learning. And, if you really don’t know much about their alternative (or have good evidence that yours is better) you’re not likely to convince anybody anyway.
I think a common mistake I make is to express humility by continuing to advocate for my position while making my uncertainty more explicit.(1) People often don’t read this as humility, though, because I’m not acting as if I believe they might have evidence that they’re right and I don’t sound curious about their alternative and about whether I’m wrong.
When I’m emotionally invested in a topic (particularly value-laden issues like EA) I often struggle to remember to be in learning-mode instead of persuasion-mode—even when I’m genuinely curious about what the other person has to say. FWIW, one strategy I’ve personally used in this situation is to try to mentally keep track of the amount of time I’m spending explaining my position versus listening to theirs. If I don’t have good evidence about their position, hearing their take is usually more interesting (even if my system 1 sometimes forgets this).
A caveat
One failure mode with my approach, though, is that there can be a fine line between trying to learn about someone’s position (which comes off as humble) and interrogating them (which does not). When I’m trying to make sure I’m not interrogating someone, the question I usually ask myself is:
“Did I ask this question because I think they will have a good answer or because I think they will not have a good answer?”
(1) I had to learn the hard way that, at least for me, this doesn’t actually come off as less confident. Instead it comes off as more confident AND better calibrated. Which is an improvement but doesn’t lend itself to coming off as humble or to making others feel comfortable expressing disagreement.