My intuition, having seen proposals from people both inside and outside of CEA, is that this collation will almost certainly take longer than a week or two:
A higher standard than “broadly acceptable” seems important, since whatever posts are chosen will be seen as having CEA’s endorsement (assuming CEA is the one doing the collation). A few critics can contribute a lot of negative public feedback, and even a single unfortunate line in a curated post may cause problems later.
I also think there’s a lot of value to publishing a really good collection the first time around:
Making frequent revisions to a “curated” collection of posts makes them look a lot less curated, and removes from the public eye comments that authors may have worked on under the assumption that they’d stick around.
It’s also not great if Post A is chosen for curation despite Post B being a much stronger take on the same subject; assembling a collection of roughly the best posts on their respective topics takes a lot of experience with EA content and consultation with other experienced people (no one has read everything, and even people who’ve read almost everything may differ in which pieces they consider “best”).
That said, the task is doable, and I’m consulting with other CEA staff who work on the Forum to draft a top-level answer about our plans for this feature.
I like that wording, and don’t have any changes to suggest.
Here’s the post I believe Yannick was thinking of. (Find the phrase “core series of posts”.)
This is still something we plan to do in the future; I’m consulting with other CEA staff who work on the Forum to draft a top-level answer to Richard’s question.
I share Habryka’s concern about the complexity of the project; each step clearly has a useful purpose, but adding more steps to a process still tends to make it harder to finish that process in a reasonable amount of time. I think this system could work, but I also like the idea of running a quick, informal test of a simpler system to see what happens.
Habryka, if you create the “discussion thread” you’ve referenced here, I will commit to leaving at least one comment on every project idea; this seems like a really good way to test the capabilities of the Forum as a place where projects can be evaluated.
(It would be nice if participants shared a Google Doc or something similar for each of their ideas, since leaving in-line comments is much better than writing a long comment with many different points, but I’m not sure about the best way to turn “comments on a doc” into something that’s also visible on the Forum.)
Good post! I share Greg’s doubts about the particular question of salaries (and think that lowering them would have several bad consequences), but I think you’ve summed up most of the major things that people get, or hope to get, from jobs at EA organizations.
Other than your reasons and “money”, I’d include “training”; if you want to learn to do Open Phil-style research, working at Open Phil is the most reliable way to do this.
When I started at GiveWell, I was surprised at how people in these circles treated me when they found out I was working there, even though I was an entry-level employee.
Are there any examples of this that stand out to you? I can certainly believe that it happened, but I’m having trouble picturing what it might look like.
(Since I began working at CEA five months ago, I haven’t noticed any difference in the way my interactions with people in EA have gone, save for cases where the interaction was directly related to my job. But perhaps there are effects for me, too, and I just haven’t spotted them yet.)
Somewhat mixed in with the above points, I think there’s a lot of value to be had from feeling like a member of a tribe, especially a tribe that you think is awesome. I think working at a professional EA organization is the closest thing there is to a royal road to tribal membership in the EA community.
I think you’re right that EA work is a quick way to feel like part of the tribe, and that’s something I’d like to change.
So I’ll repeat what I’ve said in the comments of other posts: If you believe in the principles of EA, and are taking action on them in some way (work, research, donations, advocacy, or taking steps to do any of those things in the future), I consider you a member of the EA “tribe”.
I can’t speak for any other person in EA, but from what I’ve heard in conversations with people at many different organizations, I think that something like my view is fairly common.
That version does sound better. One more suggested version:
Thank you for taking the time to share what you’ve done. Since we also asked about your future plans, could we follow up with one more short survey a year from now, to see what happened?
If that’s alright with you, please enter your email address below—it will not be shared with anyone, or used for any other purpose.
I’m hoping this feels a bit less high-pressure than “what you may still do”, but you could also remove “to see what happened” to help with that.
I agree that this doesn’t run into the first two problems, though it could make giving anonymous feedback even more tempting. More practically, it seems like it would be pretty annoying to code, and would provide less value than similarly tech-intensive features that are being worked on now. If I hear a lot of other calls for an “anonymous feedback” option, I may consider it more seriously, but in the meantime, I’ll keep pushing for open, honest criticism.
I haven’t read every comment on every post, but so far, I’ve seen barely any posts or comments on the new version of the Forum where someone was criticized and reacted very negatively. Mostly, reactions have been like this post (asking for more details) or have shown someone updating their views/adding detail and nuance to their arguments.
When you are events focused, you are competing with many things—family, friends, hobbies, Netflix, cinema, etc. If your focus is more on helping people doing good, it’s no longer about having people turn up to an event, it’s about keeping people up to date with relevant info that is helpful for them. When there is a relevant opportunity for them to do something in person, they might be more inclined to do so.
I really like this point, and the related Kelsey Piper quote. EA, like any social movement, is likely to grow and succeed largely based on how helpful it is for its members. Having a “what can I do for you?” mindset has been really useful to me in my time running a couple of different EA groups (and working at CEA).
When you say that Meetup.com “gave a worse impression of effective altruism”, do you mean that it actually seemed to have negative value, or just that it was worse than Facebook because it didn’t give you an easy way to contact people soon after they’d joined? If the former, can you talk about any specific negative effects you noticed? (One of the groups I’m affiliated with is still using Meetup, so I’m quite curious about this.)
Fantastic post, Jeremy! I’m a bit biased, since I had the chance to see earlier drafts, but I really like the generous spirit of this initiative, and it seems like a low-risk, high-potential way to grow the community. It’s very kind of you to offer funding to others who want to try their own giveaways.
In fact, I might just try this myself come Giving Season; I’ve set a reminder in my calendar to think about it on November 15th. Thanks for the idea.
Regarding the survey: Consider changing the wording on question #9:
**The bulk of the impact from introducing people to Effective Altruism probably happens over the long term.** If you think you might make future changes, or you generally agree with the principles of the book, we’d love to be able to check in with you in a year, to see how things are going.
I’d remove the section in bold. If people are really interested in EA, they’ll hopefully give you contact information either way; if they’re on the fence, they might feel a bit objectified being referred to as sources of impact, or guilty about donating once and planning not to do so in the future (I can imagine giving $100 to GiveWell, then seeing the survey and losing my warm glow because I haven’t had “the bulk of my impact”).
This is a highly speculative suggestion, though, and I don’t think it makes a big difference either way.
I don’t love the idea (suggested by one comment here) of having separate anonymous feedback, for these reasons:
Public feedback allows people to upvote comments if they agree (very efficient for checking on how popular a view is)
Public feedback makes it easier for the author to respond
Most importantly, public feedback generally strengthens our norm of “it’s okay to criticize and to be criticized, because no one is perfect and we’re all working together to improve our ideas”.
Of course, these factors have to be balanced against the likelihood that anonymous feedback mechanisms will allow for more and more honest feedback, which is a considerable upside. But I’d hope that the EA community, of all groups, can find a way to thrive under a norm of transparent feedback.
It looks like Jan’s comment on your other post was heavily upvoted, indicating general agreement with his concerns, but I’d hope that people with other concerns would have written about them.
I’ve recommended before that people try to avoid downvoting without either explaining their reasoning or upvoting a response that matched their views. I’ve been happy to see how common this is, though there’s still room for improvement.
Please keep posting and sharing your ideas—one of the Forum’s core purposes is “helping new people with ideas get feedback”, and no one entered the EA community with only good ideas to share. (As far as “initial experience with forum use” goes, you’re still doing a lot better than GiveWell’s Holden Karnofsky circa 2007.)
I agree with this point. Even in the startup world, where due diligence is common, most projects fail after spending a lot of money, achieving very little impact in the process.
In the case of EA projects, even a project that doesn’t have negative value can still lead to a lot of “waste”: There’s a project team that spent time working on something that failed (though perhaps they got useful experience) and one or more donors who didn’t get results.
Hits-based giving (which focuses on big successes even at the cost of some failure) is a useful approach, but in order for that to work, you do need a project that can at least plausibly be a hit, and no idea is strong enough to create that level of credibility by itself. Someone needs to get to know the team’s background and skills, understand their goals, and consider the reasons that they might not reach those goals.
Side note: I hope that anyone who independently funds an EA project considers writing a post about their decision, as Adam Gleave did after winning the 2017 donor lottery.
I like the use of the “non-X” concept (which is new to me) to explore post-scarcity, a topic that has been talked about a lot within EA. Something like a universal basic income has a lot of popular support among members of this community, and there’s a lot of writing on “how good the world could be, if we do things right and don’t experience a catastrophe from which we can’t recover”.
Some resources you might like, if you haven’t seen them yet:
Eliezer Yudkowsky’s “Fun Theory Sequence”
The Future of Life Institute’s “Planning for Existential Hope”
I agree with Denise’s concerns about the time involved in following these suggestions, but I also think there are good lessons worth pointing out here. Some notes:
Consider that “EA organization” refers to a very small group of nonprofits, which collectively hire… 50 people each year? Remove GiveWell and the Open Philanthropy Project (which have their own detailed guidelines on what they look for in applicants), and I’d guess that the number drops by half or more. Many of the positions recommended by 80,000 Hours require deep expertise in a particular topic; research and volunteering can help, but questions of general EA knowledge/experience aren’t likely to be as important. If you want to work on AI alignment, focus on reading CHAI’s bibliography rather than, say, the EA Forum.
As far as volunteering, research, and other projects go, quality > quantity. Years of reading casually about EA and posting on social media don’t hurt, but these factors aren’t nearly as important as a work reference who raves about your skills as a volunteer, or a Forum post that makes a strong contribution to the area you want to work on.
If you want an operations job and you wrote a blog post about the comparison of top online operational resource courses, then you are a person EA organisations are interested in talking to.
This only holds true if the post was useful, helping EA orgs solve a problem they had or getting strong positive feedback from people who used it to select a course. There’s a lot of writing in the EA blogosphere; much of it is great, but some posts just never find an audience. Again, quality > quantity; better to spend a lot of time figuring out which post idea is likely to have the most impact, then working on the best version you can produce, than to publish a lot of posts you didn’t have the time to think about as carefully.
(This doesn’t mean that the Forum itself discourages unpolished work; we’re happy to see your ideas! But the writing most likely to demonstrate your practical skills is writing that you’ve polished.)
As an aside: I’m not a career coach by any means, but I’ve worked in EA operations and EA content, and I’ve talked to a lot of different organizations about what they look for in applicants. If you have particular questions about applying to an org in/adjacent to EA, you’re welcome to comment here or email me (though it’s possible that my advice will consist of “ask these questions to the organization” or “read this article they wrote about what they want”).
I work for CEA, but these views are my own.
Slack’s not perfect, but here are some features I like:
Emotes let you “respond” to a message in less than a second with zero typing. At CEA, we have an “eyes” emote that means “I’ve seen this message”, which saves me 30 seconds over sending a “thanks for sending this, I’ve read it” email. We have lots of other emotes that stand in for other kinds of quick messages. I send a lot less email at CEA than I did in my most recent corporate job, at a tech firm with pretty standard messaging practices.
Channels act as a proactive sorting system. CEA has an “important” channel for time-sensitive things that everyone should read and a “general” channel for things that everyone should read, but that aren’t time-sensitive. If all the messages on those channels were emails, I’d wind up reading them all as they came in, but in Slack I can ignore most of them until I hit the time in my day when I want to catch up on messages, without spending any energy on sorting.
Slack also has a feature that lets you set “statuses” in the same way the HBR article discusses (e.g. “working on important thing, available after 4:00 pm”), which takes less time than writing an auto-reply and also doesn’t add dozens of automated emails to other people’s inboxes when they try contacting you.
1. I’d really recommend finding a different phrase than “low levels of emotional control”, which is both more insulting than seems ideal for conversations in an EA context and too vague to be a useful descriptor. (There are dozens of ways that “controlling one’s emotions” might be important within EA, and almost no one is “high” or “low” for all of them.)
2. “Less welcoming for everyone else” is too broad. Accommodating people who prefer some topics not be brought up certainly makes EA less welcoming for some people: Competing access needs are real, and a lot of people aren’t as comfortable with discussions where emotions aren’t as controlled, or where topics are somewhat limited.
But having “high emotional control” (again, I’d prefer a different term) doesn’t necessarily mean feeling unwelcome in discussions with people who are ideological or “less controlled” in some contexts.
One of the features I like most in a community is “people try to handle social interaction in a way that has the best average result for everyone”.
I’d consider “we figure out true things” to be the most important factor we should optimize for, and our discussions should aim for “figuring stuff out”. But that’s not the only important result; another factor is “we all get along and treat each other well”, because there’s value in EA being a well-functioning community of people who are happy to be around each other. If having a topic consistently come up in conversation is draining and isolating to some members of the community, I think it’s reasonable that we have a higher bar for that topic.
This doesn’t mean abandoning global poverty because people think it seems colonialist; it might mean deciding that someone’s Mormon manifesto doesn’t pass the bar for “deserves careful, point-by-point discussion”. That isn’t very inclusive to the manifesto’s author, but it seems very likely to increase EA’s overall inclusiveness.
I work for CEA, but the following views are my own. I don’t have any plans to change Forum policy around which topics are permitted, discouraged, etc. This response is just my attempt to think through some considerations other EAs might want to make around this topic.
While we all have topics on which our emotions get the better of us, those who leave are likely to be overcome to a greater degree and on a wider variety of topics. This means that they will be less likely to be able to contribute productively by providing reasoned analysis. But further than this, they are more likely to contribute negatively by being dismissive, producing biased analysis or engaging in personal attacks.
I don’t really care how likely someone is to be “overcome” by their emotions during an EA discussion, aside from the way in which this makes them feel (I want people in EA, like people everywhere, to flourish).
Being “overcome” and being able to reason productively seem almost orthogonal in my experience; some of the most productive people I’ve met in EA (and some of the nicest!) tend to have unusually strong emotional reactions to certain topics. There are quite a few EA blogs that alternate between “this thing made me very angry/sad” and “here’s an incredibly sophisticated argument for doing X”. There’s some validity to trying to increase the net percentage of conversation that isn’t too emotionally inflected, but my preference would be to accommodate as many productive/devoted people as we can until it begins to trade off with discussion quality. I’ve seen no evidence that we’re hitting this trade-off to an extent that demands we become less accommodating.
(And of course, biased analysis and personal attacks can be handled when they arise, without our needing to worry about being too inclusive of people who are “more likely” to contribute those things.)
The people who leave are likely to be more ideological. This is generally an association between being more radical and more ideological, even though there are also people who are radical without being ideological. People who are more ideological are less able to update in the face of new evidence and are also less likely to be able to provide the kind of reasoned analysis that would cause other EAs to update more towards their views.
See the previous point. I don’t mind having ideological people in EA if they share the community’s core values. If their commitment to an ideology leads them to stop upholding those values, we can respond to that separately. If they can provide reasoned analysis on Subject A while remaining incorrigibly biased on Subject B, I’ll gladly update on the former and ignore the latter. (Steven Pinker disagrees with many EAs quite sharply on X-risk, but most of his last book was great!)
Even when there is a cost to participating, someone who considers the topic important enough can choose to bear it.
This isn’t always true, unless you use a circular definition of “important”. As written, it implies that anyone who can’t bear to participate must not consider the topic “important enough”, which is empirically false. Our capacity to do any form of work (physical or mental) is never fully within our control. The way we react to certain stimuli (sights, sounds, ideas) is never fully within our control. If we decided to render all the text on the EA Forum at a 40-degree angle, we’d see our traffic drop, and the people who left wouldn’t just be people who didn’t think EA was sufficiently “important”.
In a similar vein:
The more committed [you are] to a cause, the more you are willing to endure for it. We agree with CEA that committed EAs are several times more valuable than those who are vaguely aligned, so that we should [be] optimising the movement for attracting more committed members.
Again, this is too simplistic. If we could have 100 members who committed 40 hours/week or 1000 members who committed 35 hours/week, we might want to pursue the second option (35,000 total hours per week versus 4,000), even if we weren’t “optimizing for attracting more committed members”. (I don’t speak for CEA here, but it seems to me like “optimize the amount of total high-fidelity and productive hours directed at EA work” is closer to what the movement wants, and even that is only partly correlated with “create the best world we can”.)
You could also argue that “better” EAs tend to take ideas more seriously, that having a strong negative reaction to a dangerous idea is a sign of seriousness, and that we should therefore be trying very hard to accommodate people who have reportedly had very negative reactions to particular ideas within EA. This would also be too simplistic, but there’s a kernel of truth there, just as there is in your statement about commitment.
Even if limiting particular discussions would clearly be good, once we’ve decided to limit discussions at all, we’ve opened the door to endless discussion and debate about what is or is not unwelcoming (see Moderator’s Dilemma). And ironically, these kinds of discussions tend to be highly partisan, political and emotional.
The door is already open. There are dozens of preexisting questions about which forms of discussion we should permit within EA, on specifically the EA Forum, within any given EA cause area, and so on. Should we limit fundraising posts? Posts about personal productivity? Posts that use obscene language? Posts written in a non-English language? Posts that give investing advice? Posts with graphic images of dying animals? I see “posts that discuss Idea X” as another set of examples in this very long list. They may be more popular to argue about, but that doesn’t mean we should agree never to limit them just to reduce the incidence of arguments.
We note that such a conclusion would depend on an exceptionally high quantity of alienating discussions, and is prima facie incompatible with the generally high rating for welcomingness reported in the EA survey. We note that there are several possible other theories.
I don’t think the authors of the Making Discussions Inclusive post would disagree. I don’t see any conclusion in that post that alienating discussions are the main factor in the EA gender gap; all I see is the claim, with some evidence from a poll, that alienating discussions are one factor, along with suggestions for reducing the impact of that particular factor.
It is worthwhile considering the example of Atheism Plus, an attempt to insist that atheists also accept the principles of social justice. This was incredibly damaging and destructive to the atheist movement due to the infighting that it led to and was perhaps partly responsible for the movement’s decline.
I don’t have any background on Atheism Plus, but as a more general point: Did the atheism movement actually decline? While the r/atheism subreddit is now ranked #57 by subscriber count (as of 13 March 2019) rather than #38 (4 July 2015), the American atheist population seems to have been fairly flat since 1991, and British irreligion is at an all-time high. Are there particular incidents (organizations shutting down, public figures renouncing, etc.) that back up the “decline” narrative? (I would assume so, I’m just unfamiliar with this topic.)
There were some things I liked about this post, but my comments here will mostly involve areas where I disagree with something. Still, criticism notwithstanding:
I appreciate the moves the post makes toward being considerate (the content note, the emphasis on not calling out individuals).
Two points from the post that I think are generally correct and somewhat underrated in debates around moderation policy: You can’t please everyone, and power relations within particular spaces can look very different than power relations outside of those spaces. This also rang true (though I consider it a good thing for certain “groups” to be disempowered in public discussion spaces):
There is a negative selection effect in that the more that a group is disempowered and could benefit from having its views being given more consideration, the less likely it is to have [the] power to make this happen.
The claim that we should not have “limited discussions” is closing the barn door after the horse is already out. The EA Forum, like almost every other discussion space, has limits already. Even spaces that don’t limit “worldly” topics may still have meta-limits on style/discourse norms (no personal attacks, serious posts only, etc.). Aside from (maybe?) 4Chan, it’s hard to think of well-known discussion spaces that truly have no limits. For example, posts on the EA Forum:
Can’t advocate the use of violence.
Are restricted in the types of criticism they can apply: “We should remove Cause X from EA because its followers tend to smell bad” wouldn’t get moderator approval, even if no individually smelly people were named.
While I don’t fully agree with every claim in Making Discussions Inclusive, I appreciated the way that its authors didn’t call for an outright ban on any particular form of speech—instead, they highlighted the ways that speech permissions may influence other elements of group discussion, and noted that groups are making trade-offs when they figure out how to handle speech.
This post also mostly did this, but occasionally slipped into more absolute statements that don’t quite square with reality (though I assume one is meant to read the full post while keeping the word “usually” in mind, to insert in various places). An example:
We believe that someone is excluded to a greater degree when they are not allowed to share their sincerely held beliefs than when they are merely exposed to beliefs that they disagree with.
This seems simplistic. The reality of “exclusion” depends on which beliefs are held, which beliefs are exposed, and the overall context of the conversation. I’ve seen conversations where someone shoehorned their “sincerely held beliefs” into a discussion to which they weren’t relevant, in such an odious way that many people who were strained on various resources (including “time” and “patience”) were effectively forced out. Perhaps banning the shoehorning user would have excluded them to a “greater degree”, but their actions excluded a lot of people, even if to a “lesser degree”. Which outcome would have been worse? It’s a complicated question.
I’d argue that keeping things civil and on-topic is frequently less exclusionary than allowing total free expression, especially as conversations grow, because some ideas/styles are repellent to almost everyone. If someone insists on leaving multi-page comments with Caps Lock on in every conversation within a Facebook group, I’d rather ask them to leave than ask the annoyed masses to grit their teeth and bear it.
This is an extreme example, of course, so I’ll use a real-world example from another discussion space I frequent: Reddit.
On the main Magic: The Gathering subreddit, conversations about a recent tournament winner (a non-binary person) were frequently interrupted by people with strong opinions about the pronoun “they” being “confusing” or “weird” to use for a single person.
This is an intellectual position that may be worth discussing in other contexts, but in the context of these threads, it appeared hundreds of times and made it much more tedious to pick out actual Magic: The Gathering content. Within days, these users were being kicked out by moderators, and the forum became more readable as a result, to what I’d guess was the collective relief of a large majority of users.
The general point I’m trying to make:
“Something nearly everyone dislikes” is often going to be worth excluding even from the most popular, mainstream discussion venues.
In the context of EA, conversations that are genuinely about effective do-gooding should be protected, but I don’t think several of your examples really fit that pattern:
Corruption in poor countries being caused by “character flaws” seems like a non sequitur.
When discussing ways to reduce corruption, we can talk about history, RCT results, and economic theory—but why personal characteristics?
Even if it were the case that people in Country A were somehow more “flawed” than people in Country B, this only matters if it shows up in our data, and at that point, it’s just a set of facts about the world (e.g. “government officials in A are more likely to demand bribes than officials in B, and bribery demands are inversely correlated with transfer impact, which means we should prefer to fund transfers in B”). I don’t see the point of discussing the venality of the A-lish compared to the B-nians separately from actual data.
I think honest advocates for cash-transfer RCTs could quite truthfully state that they aren’t trying to study whether poor people are “lazy”. Someone’s choice not to work doesn’t have to be the target of criticism, even if it influences the estimated benefit of a cash transfer to that person. It’s also possible to conclude that poor people discount the future without attaching the “character flaw” label.
Frankly, labels like this tend to muddy discussion more than they help, by obscuring actual data and creating fake explanations (“poor people don’t care as much about the future, which is bad” < “poor people don’t care as much about the future, but this is moderated by factors A and B, and is economically rational if we factor in C, and here’s a model for how we can encourage financial planning by people at different income levels”).
The same problem applies to your discussion of female influence and power; whether or not a person’s choices have led them to have less power seems immaterial to understanding which distributions of power tend to produce the best outcomes, and how particular policies might move us toward the best distributions.
To summarize the list of points above: In general, discussions of whether a state of the world is “right”, or whether a person is “good” or “deserving”, don’t make for great EA content. While I wouldn’t prohibit them, I think they are far more tempting than they are useful, and that we should almost always try to use “if A, then B” reasoning rather than “hooray, B!” reasoning.
Of course, “this reasoning style tends to be bad” doesn’t mean “prohibit it entirely”. But it makes the consequence of limiting speech topics seem a bit less damaging, compared to what we could gain by being more inclusive. (Again, I don’t actually think we should add more limits in any particular place, including the EA Forum. I’m just pointing out considerations that other EAs might want to make when they think about these topics.)