I found this post by Rob Bensinger collecting anonymous comments on EA from 2017, with the question prompt:
If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]
Many of the comments still resonate today. I recommend reading the whole list, but there are a lot—so I’ve chosen a few highlights and comment exchanges I thought were particularly interesting. I’ve shortened a few for brevity (indicated by ellipses).
I don’t agree with many of these comments, but it’s interesting to see how people perceived things back then.
Highlights
On supporting community members
Related: Should the EA community be cause-first or member-first?
#28 - on people dismissing those who start as “ineffective altruists” (top-voted comment with 23 karma)
I have really positive feelings towards the effective altruism community on the whole. I think EA is one of the most important ideas out there right now.
However, I think that there is a lot of hostility in the movement towards those of us who started off as ‘ineffective altruists,’ as opposed to coming from the more typical Silicon Valley perspective. I have a high IQ, but I struggled through college and had to drop out of a STEM program as a result of serious mental health disturbances. After college, I wanted to make a difference, so I’ve spent my time since then working in crisis homeless shelters. … I know that the work I’ve done isn’t as effective as what the Against Malaria Foundation does, but I’ve still worked really hard to help people, and I’ve found that my peers in the movement have been very dismissive of it.
I’m really looking to build skills in an area where I can do more effective direct work. I keep hearing that the movement is talent-constrained, but it isn’t clearly explained anywhere what the talent constraints are, specifically. I went to EA Global hoping for career advice—an expensive choice for someone in social work!—but even talking one-on-one with Ben Todd, I didn’t get any actionable advice. There’s a lot of advice out there for people who are interested in earning to give, and for anyone who already has great career prospects, but for fuck-ups like me, there doesn’t seem to be any advice on skills to develop, how to go back to school, or anything of that kind.
When I’ve tried so hard to get any actionable advice whatsoever about what I should do, and nobody has any… that’s a movement that isn’t accessible to me, and isn’t accessible to a lot of people, and it makes me want to ragequit. …
#40 - the community should be better at supporting its members
I’m the leader of a not-very-successful EA student group. I don’t get to socialize with people in EA that much.
I wish the community were better at supporting its members in accomplishing things they normally couldn’t. I feel like almost everyone just does the things that they normally would. People that enjoy socializing go to meetups (or run meetups); people that enjoy writing blog posts write blog posts; people that enjoy commenting online comment online; etc.
Very few people actually do things that are hard for them, which means that, for example, most people aren’t founding new EA charities or thinking original thoughts about charity or career evaluation or any of the other highly valuable things that come out of just a few EA people. And that makes sense; it doesn’t work to just force yourself to do this sort of thing. But maybe the right forms of social support and reward could help.
#22 - on a lack of mentorship and guidance
I think that mentorship and guidance are lacking and undervalued in the EA community. This seems odd to me. Everyone seems to agree that coordination problems are hard, that we’re not going to solve tough problems without recruiting additional talent, and that outreach in the “right” places would be good. Functionally, however, most individuals in the community, most organizations, and most heads of organizations seem to act as though they can make a difference through brute force alone.
I also don’t get the impression that most EA organizations and heads of EA organizations are keen on meeting or working with new and interested people. People affiliated with EA write many articles about increasing personal productivity; I have yet to read a single article about increasing group effectiveness.
80,000 Hours may be the sole exception to this rule, though I haven’t formally gone through their coaching program, so I don’t know what their pipeline is like. CFAR also seems to be addressing some of these issues, though their workshops are still prohibitively expensive for lots of people, especially newcomers. EA outreach is great, but once people have heard about EA, I don’t think it’s clear what they should do or how they should proceed.
The final reason I find this odd is that in most professional settings, mentorship is explicitly valued. Even high-status people who have plenty on their plate will set aside some time for service.
My model for why this is happening has two parts. First, I think there is some selection effect going on; most people in EA are self-starters who came on board and paved their own path. (That’s great and all, but do people think that most major organizations and movements got things done solely by a handful of self-starters trying to cooperate?)
Second, I think it might be the case that most people are good at doing cost-benefit analyses on how much impact their pet project will have on the world, but aren’t thinking about the multiplier effect they could have by helping other people be effective. (This is often because they are undervaluing the effectiveness of other, relatively not-high-status people.)
Reply from Daniel Eth:
Another possibility is that most people in EA are still pretty young, so they might not feel like they’re really in a position to mentor anyone.
On conformism / dogmatism
#1 - on conformism & CFAR committing to AI risk
My system-1 concerns about EA: the community exhibits a certain amount of conformism, and a general unwillingness to explore new topics. … The reason I think this is an issue is the general lack of really new proposals in EA discussion posts. … The organization that seemed to me the most promising for dealing with unknown unknowns (CFAR, who are in a unique position to develop new thinking techniques to deal with this) has recently committed to AI risk in a way that compromises the talent they could have directed to innovative EA.
#27
Many practitioners strike me as being dogmatic and closed-minded. They maintain a short internal whitelist of things that are considered ‘EA’—e.g., working at an EA-branded organization, or working directly on AI safety. If an activity isn’t on the whitelist, the dogmatic (and sometimes wrong) conclusion is that it must not be highly effective. I think that EA-associated organizations and AI safety are great, but they’re not the only approaches that could make a monumental difference. If you find yourself instinctively disagreeing, then you might be in the group I’m talking about. :)
People’s natural response should instead be something like: ‘Hmm, at first blush this doesn’t seem effective to me, and I have a strong prior that most things aren’t effective, but maybe there’s something here I don’t understand yet. Let’s see if I can figure out what it is.’
Level of personal involvement in effective altruism: medium-high. But I wouldn’t be proud to identify myself as EA.
#39 - on EA having picked all the low-hanging fruit
Level of involvement: I’m not an EA, but I’m EA-adjacent and EA-sympathetic.
EA seems to have picked all the low-hanging fruit and doesn’t know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It’s hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it’s hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.
The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there’s not much new to say to change people’s minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren’t any.)
Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin’s concerns), and value drift.
And others (there’s overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.
What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there’s not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it’s stuck as it is, at best—which is discouraging, but if that’s the reality, EAs should accept it.
#32 - on the limitations of single orgs fixing problems
There seems to be a sense in effective altruism that the existence of one organization working on a given problem means that the problem is now properly addressed. The thought appears to be: ‘(Organization) exists, so the space of evaluating (organization function) is filled and the problem is therefore taken care of.’
Organizations are just a few people working on a problem together, with some slightly better infrastructure, stable funding, and time. The problems we’re working on are too big for a handful of people to fix, and the fact that a handful of people are working in a given space doesn’t suggest that others shouldn’t work on it too. I’d like to see more recognition of the conceptual distinction between the existence of an organization with a certain mission, and what exactly is and is not being done to accomplish that mission. We could use more volunteers/partners to EA organizations, or even separate organizations addressing the same issue(s) using a different epistemology.
To encourage this, I’d love to see more support for individuals doing great projects who are better suited to the flexibility of doing work independently of any organization, or who otherwise don’t fit a hole in an organization.
#32b) - on EA losing existing high-value people and failing to gain new ones
The high-value people from the early days of effective altruism are disengaging, and the high-value people who might join are not engaging. There are people who were once quite crucial to the development of EA ‘fundamentals’ who have since parted ways, and have done so because they are disenchanted with the direction in which they see us heading.
More concretely, I’ve heard many reports to the effect: ‘EA doesn’t seem to be the place where the most novel/talented/influential people are gravitating, because there aren’t community quality controls.’ While inclusivity is really important in most circumstances, it has a downside risk here that we seem to be experiencing. I believe we are likely to lose the interest and enthusiasm of those who are most valuable to our pursuits, because they don’t feel like they are around peers, and/or because they don’t feel that they are likely to be socially rewarded for their extreme dedication or thoughtfulness.
I think that the community’s dip in quality comes in part from the fact that you can get most of the community benefits without being a community benefactor—e.g. invitations to parties and likes on Facebook. At the same time, one incurs social costs for being more tireless and selfless (e.g., skipping parties to work), for being more willing to express controversial views (e.g., views that conflict with clan norms), or for being more willing to do important but low-status jobs (e.g., office manager, assistant). There’s a lot that we’d need to do in order to change this, but as a first step we should be more attentive to the fact that this is happening.
On the Bay Area community
#18 - on improving status in the Bay Area community so people feel less insecure
Speaking regarding the Bay Area effective altruism community: There’s something about status that could be improved. On the whole, status (and what it gets you) serves a valuable purpose; it’s a currency used to reward those producing what the community values. The EA community is doing well at this in that it does largely assign status to people for the right things. At the same time, something about how status is being done is leaving many people feeling insecure and disconnected.
I don’t know what the solution is, but you said magic wand, so I’ll punt on what the right response should be.
#8 - move EA to somewhere that’s not the Bay Area
Related: Say “nay!” to the Bay (as the default)!
If I could change the effective altruism community tomorrow, I would move it somewhere other than the Bay Area, or at least make it more widely known that moving to the Bay is defecting in a tragedy of the commons and makes you Bad.
If there were large and thriving EA communities all over the place, nobody would need to move to the Bay, we’d have better outreach to a number of communities, and fewer people would have to move a long distance, get US visas, or pay a high rent in order to get seriously involved in EA. The more people move to the Bay, the harder it is to be outside the Bay, because of the lack of community. If everyone cooperated in developing relatively local communities, rather than moving to the Bay, there’d be no need to move to the Bay in the first place. But we, a community that fangirls over ‘Meditations on Moloch’ (http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and prides itself on working together to get shit done, can’t even cooperate on this simple thing.
I know people who are heartbroken and depressed because they need community and all their partners are in the Bay and they want to contribute, but they can’t get a US visa or they can’t afford Bay Area rent levels, so they’re stuck friendless and alone in whatever shitty place they were born in. This should not be a hard problem to solve if we apply even a little thought and effort to it; any minimally competent community could pull this off.
#34 - EA is unsophisticated regarding policy
The way that we talk about policy in the effective altruism community is unsophisticated. I understand that this isn’t most EAs’ area of expertise, but in that case just running around and saying ‘we should really get EAs into policy’ is pretty unhelpful. Anyone who is fairly inexperienced in ‘policy’ could quickly get a community-knowledge comparative advantage just by spending a couple of months doing self-study and having conversations, and could thereby start helpfully orienting our general cries for more work on ‘policy.’
To be fair, there are some people doing this. But why not more?
On newcomers
#3 - talking about MIRI to newcomers makes you seem biased
Stop talking about AI in EA, at least when doing EA outreach. I keep coming across effective altruism proponents claiming that MIRI is a top charity, when they seem to be writing to people who aren’t in the EA community who want to learn more about it. Do they realize that this comes across as very biased? It makes it seem like ‘I know a lot about an organization’ or ‘I have friends in this organization’ are EA criteria. Most importantly, talking about AI in doomsday terms sounds kooky. It stands apart from the usual selections, as it’s one of the few that’s ‘high stakes.’ I rarely see effective altruists working towards environmental, political, anti-nuclear, or space exploration solutions, which I consider of a similar genre. I lose trust in an effective altruist’s evaluations when they evaluate MIRI to be an effective charity.
I’ve read a few articles and know a few EA people.
#31 - make EA more welcoming to newcomers
I work for an effective altruism organization. I’d say that over half of my friends are at least adjacent to the space and talk about EA-ish topics regularly.
The thing I’d most like to change is the general friendliness of first-time encounters with EA. I think EA Global is good about this, but house parties tend to have a very competitive, emotionally exhausting ‘everyone is sizing you up’ vibe, unless you’re already friends with some people from another context.
Next-most-important (and related), probably, is that I would want everyone to proactively express how much confidence they have in their statements in some fashion, through word choice, body language, and tone of voice, rather than providing a numerical description only when explicitly asked. This can prevent false-consensus effects and stop people from assuming that a person must be totally right because they sound so confident.
More selfishly, another thing I wish for is more social events that consist of 10-20 people doing something in the daytime with minimal drugs, rather than 50-100 people at a wild party. I just enjoy small daytime gatherings so much more, and I would like to get closer to the community, but I rarely have the energy for parties.
#5 - lack of good advice to newcomers beyond donate & advocate
At multiple EA events that I’ve been to, new people who were interested and expressed curiosity about what to do next were given no advice beyond ‘donate money and help spread the message’—even by prominent EA organizers. My advice to the EA community would be to stop focusing so much on movement-building until (a) EA’s epistemics have improved, and (b) EAs have much more developed and solid views (if not an outright consensus) about the movement’s goals and strategy.
To that end, I recommend clearly dividing ‘cause-neutral EA’ from ‘cause-specific effectiveness’. The lack of a clear divide contributes to the dilution of what EA means. (Some recent proposals I’ve seen framed by people as ‘EA’ have included a non-profit art magazine and a subcommunity organized around fighting Peter Thiel.) If we had a notion of ‘in this space/forum/organization, we consider the most effective thing to do given that one cares primarily about art’ or ‘given that one is focused on ending Alzheimer’s, what is the most effective thing to do?’, then people could spend more time seriously discussing those questions and less bickering over what counts as ‘EA.’
The above is if we want a big-tent approach. I’m also fine with just cause-neutral evaluation and the current-seemingly-most-important-from-a-cause-neutral-standpoint causes being deemed ‘EA’ and all else clearly being not, no matter who that makes cranky.
#23 - on bait & switch, EA as principles, getting an elite team at meta orgs like CFAR / CEA
I used to work for an organization in EA, and I am still quite active in the community.
1 - I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but—wink, nod—that’s just what we do to get people in the door so that we can convert them to helping out with AI / animal suffering / (insert weird cause here).’ This disturbs me.
2 - In general, I think that EA should be a principle, not a ‘movement’ or set of organizations. I see no reason that religious charities wouldn’t benefit from exposure to EA principles, for example.
3 - I think that the recent post on ‘Ra’ was in many respects misguided, and that in fact a lack of ‘eliteness’ (or at least some components of it) is one of the main problems with many EA organizations.
There’s a saying, I think from Eliezer, that ‘the important things are accomplished not by those best suited to do them, or by those who ought to be responsible for doing them, but by whoever actually shows up.’ That saying is true, but people seem to use this as an excuse sometimes. There’s not really any reason for EA organizations to be as unprofessional and inefficient as they are. I’m not saying that we should all be nine-to-fivers, but I’d be very excited to see the version of the Centre for Effective Altruism or the Center for Applied Rationality that cared a lot about being an elite team that’s really actually trying to get things done, rather than the version that’s sorta ad-hoc ‘these are the people who showed up.’
4 - Things are currently spread over way too many sources: Facebook, LessWrong, the EA Forum, various personal blogs, etc.
Rob Bensinger replied:
I’d be interested to hear more about examples of things that CEA / CFAR / etc. would do differently if they were ‘an elite team that’s really actually trying to get things done’; some concreteness there might help clarify what the poster has in mind when they say there are good things about Ra that EA would benefit from cultivating.
For people who haven’t read the post, since it keeps coming up in this thread: my impression is that ‘Ra’ is meant to refer to something like ‘impersonal, generic prestige,’ a vague drive toward superficially objective-seeming, respectable-seeming things. Quoting Sarah’s post:
“Ra involves seeing abstract, impersonal institutions as more legitimate than individuals. For instance, I have the intuition that it is gross and degrading to pay an individual person to clean your house, but less so to hire a maid service, and still less so if a building that belongs to an institution hires a janitor. Institutions can have authority and legitimacy in a way that humans cannot; humans who serve institutions serve Ra.
“Seen through Ra-goggles, giving money to some particular man to spend on the causes he thinks best is weird and disturbing; putting money into a foundation, to exist in perpetuity, is respectable and appropriate. The impression that it is run collectively, by ‘the institution’ rather than any individual persons, makes it seem more Ra-like, and therefore more appealing. [...]
“If Horus, the far-sighted, kingly bird, represents “clear brightness” and “being the rightful and just ruler”, then Ra is a sort of fake version of these qualities. Instead of the light that distinguishes, it’s the light too bright to look at. Instead of clear brightness, it’s smooth brightness.
“Instead of objectivity, excellence, justice, all the “daylight” virtues associated with Horus (what you might also call Apollonian virtues), Ra represents something that’s also shiny and authoritative and has the aesthetic of the daylight virtues, but in an unreal form.
“Instead of science, Ra chooses scientism. Instead of systematization and explicit legibility, Ra chooses an impression of abstract generality which, upon inspection, turns out to be zillions of ad hoc special cases. Instead of impartial justice, Ra chooses a policy of signaling propriety and eliteness and lack of conflicts of interest. Instead of excellence pointed at a goal, Ra chooses virtuosity kept as an ornament.
“(Auden’s version of Apollo is probably Ra imitating the Apollonian virtues. The leadership-oriented, sunnily pragmatic, technological approach to intellectual affairs is not always phony — it’s just that it’s the first to be corrupted by phonies.)
“Horus is not Ra. Horus likes organization, clarity, intelligence, money, excellence, and power — and these things are genuinely valuable. If you want to accomplish big goals, it is perfectly rational to seek them, because they’re force multipliers. Pursuit of force multipliers — that is, pursuit of power — is not inherently Ra. There is nothing Ra-like, for instance, about noticing that software is a fully general force multiplier and trying to invest in or make better software. Ra comes in when you start admiring force multipliers for no specific goal, just because they’re shiny.
“Ra is not the disposition to seek power for some goal, but the disposition to approve of power and to divert it into arbitrariness. It is very much NOT Machiavellian; Machiavelli would think it was foolish.”
Nick Tarleton replied:
Huh. I really like and agree with the post about Ra, but also agree that there are things about… being a grown-up organization?… that some EA orgs I’m aware of have been seriously deficient in in the past. I don’t know whether some still are; it seems likely a priori. I can see how a focus on avoiding Ra could cause neglect of those things, but I still think avoiding Ra is critically important, it just needs to be done smarter than that. (Calling the thing ‘eliteness’, or positively associating it with Ra, feels like a serious mistake, though I can’t articulate all of my reasons why, other than that it seems likely to encourage focusing on image over substance. I think calling it ‘grown-upness’ can encourage that as well, and I don’t know of a framing that wouldn’t (this is an easy thing to mistake image for / do fronting about, and focusing on substance over image seems like an irreducible skill / mental posture), but ‘eliteness’ feels particularly bad. ‘Professionalism’ feels in between.)
Anonymous #23 replied:
CEA’s internal structure is very ad-hoc and overly focused on event planning and coordination, at least in my view. It also isn’t clear that what they’re doing is useful. I don’t really see the value add of CEA over what Leverage was doing back when Leverage ran the EA Summit.
Most of the cool stuff coming out of the CEA-sphere seems to be done by volunteers anyway. This is not to denigrate their staff, just to question ‘Where’s the beef?’ when you have 20+ people on the team.
For that matter, why do conversations like these mostly happen on meme groups and private Facebook walls instead of being facilitated or supported by CEA?
Looking at the CFAR website, it seems like they have something like 14-15 employees, contractors, and instructors, of which only 3-4 have research as part of their job? That’s… not a good ratio for an organization with a mission that relies on research, and maybe this explains why there hasn’t been too much cool new content coming out of that sector?
To put things another way, I don’t have a sense of rapid progress being made by these organizations, and I suspect that it could be with the right priorities. MIRI certainly has its foibles, but if you look over there it seems like they’re much more focused/productive, and it’s readily apparent how each of their staffers contributes to the primary objective. Were I to join MIRI, I think I would have a clear sense of, ‘Here I am, part of a crack team working to solve this big problem. Here’s how we’re doing it.’ I don’t get that sense from any other EA organizations.
As for ‘Ra,’ it’s not that I think fake prestige is good; it’s that I think people way overcorrect, shying away from valid prestige in the name of avoiding fake prestige. This might be a reflection of the Bay Area and Oxford ‘intellectual techie’ crowds more than EA in general, but it’s silly any way you slice it.
I want an EA org whose hiring pitch is: ‘We’re the team that is going to solve (insert problem), and if you join us everyone you work with will be smart, dedicated, and hardworking. We don’t pay as much as the private sector, but you’ll do a ton more, with better people, more autonomy, and for a better cause. If that sounds good, we’d love to talk to you.’
This is a fairly ‘Ra’-flavored pitch, and obviously it has to actually be true, but I think a lot of EAs shy away from aiming for this sort of thing, and instead wind up with a style that actually favors ‘scrappiness’ and ‘we’re the ones who showed up.’ I bet my pitch gets better people.
Julia Wise of CEA replied:
This meme about ‘being the ones who show up’ is not something I’d heard before, but it explains a lot.