Anonymous EA comments
After seeing some of the debate last month about effective altruism’s information-sharing / honesty / criticism norms (see Sarah Constantin’s follow-up and replies from Holly Elmore (1,2), Rob Wiblin (1, 2), Jacy Rees, Christopher Byrd), I decided to experiment with an approach to getting less filtered feedback. I asked folks over social media to anonymously answer this question:
If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]
I got a lot of high-quality responses, and some people suggested that I cross-post them to the EA Forum for further discussion. I’ve posted paraphrased versions of many of the responses below. Some cautions:
1. I have no way to verify the identities of most of the respondents, so I can’t vouch for the reliability of their impressions or anecdotes. Anonymity removes some incentives that keep people from saying what’s on their mind, but it also removes some incentives to be honest, compassionate, thorough, precise, etc. I also have no way of knowing whether a bunch of these submissions come from a single person.
2. This was first shared on my Facebook wall, so the responses are skewed toward GCR-oriented people and other sorts of people I’m more likely to know. (I’m a MIRI employee.)
3. Anonymity makes it less costly to publicly criticize friends and acquaintances, which seems potentially valuable; but it also makes it easier to make claims without backing them up, and easier to widely spread one-sided accounts before the other party has time to respond. If someone writes a blog post titled ‘Rob Bensinger gives babies ugly haircuts’, that can end up widely shared on social media (or sorted high in Google’s page rankings) and hurt my reputation with others, even if I quickly reply in the comments ‘Hey, no I don’t.’ If I’m too busy with a project to quickly respond, it’s even more likely that a lot of people will see the post but never see my response.
For that reason, I’m wary of giving a megaphone to anonymous unverified claims. Below, I’ve tried to reduce the risk slightly by running comments by others and giving them time to respond (especially where the comment named particular individuals/organizations/projects). I’ve also edited a number of responses into the same comment as the anonymous submission, so that downvoting and direct links can’t hide the responses.
4. If people run experiments like this in the future, I encourage them to solicit ‘What are we doing right?’ feedback along with ‘What would you change?’ feedback. Knowing your weak spots is important, but if we fall into the trap of treating self-criticism alone as virtuous/clear-sighted/productive, we’ll end up poorly calibrated about how well we’re actually doing, and we’re also likely to miss opportunities to capitalize on and further develop our strengths.
Anonymous #28:
I want to hug this person so much!
I want to encourage this person to:
Write about what you’ve learned doing direct work that might be relevant to EAs.
Reach out to me if I can be helpful with this in any way.
Keep doing the good work you know how to do, if you don’t see any better options.
Stay alert for high-leverage opportunities to do more, including opportunities you can see and other EAs can’t, where additional funding or people or expertise that EAs might have would be helpful.
“Keep doing the good work you know how to do, if you don’t see any better options” still sounds implicitly dismissive to me. It sounds like you believe there are better options, and only a lack of knowledge or vision is keeping this person from identifying them.
Breaking up fistfights and intervening in heroin overdoses sound to me like things with small-to-moderate chances of preventing catastrophic, permanent harm to the people involved. I don’t know how often opportunities like that come up, but is it so hard to imagine that they outstrip a GWWC pledger on an average or even substantially above-average salary?
Meta: this seems like it was a really valuable exercise based on the quality of the feedback. Thank you for conceiving it, running it, and giving thought to the potential side effects and systematic biases that could affect such a thing. It updates me in the direction that the right queries can produce a significant amount of valuable material if we can reduce the friction to answering such queries (esp. perfectionism) and thus get dialogs going.
Definitely agreed. In this spirit, is there any reason not to make an account with (say) a username of username, and a password of password, for anonymous EAs to use when commenting on this site?
I think this would be too open to abuse; see the concerns I raised in the OP.
An example of a variant on this idea that might work is to take 100 established+trusted community members, give them all access to the same forum account, and forbid sharing that account with any additional people.
What about an anonymous forum that was both private and had a strict no-object-level-names policy, personal or organizational, so that ideas could be discussed more freely?
Obviously there’d be grey areas around alluding to object-level people and organizations, but I think we can simply elect a king who is reasonable and agree not to squabble about the chosen line.
Anonymous #22:
Another possibility is that most people in EA are still pretty young, so they might not feel like they’re really in a position to mentor anyone.
Anonymous #6:
Relevant resources:
Fact Posts: How and Why
The Open Philanthropy Project’s Shallow Investigations provide nice template examples.
The Neglected Virtue of Scholarship
Scholarship: How to Do It Efficiently
I’m fairly new to the EA Forum, maybe someone who’s been here longer knows of other resources on this site.
Even simpler than fact posts and shallow investigations would be Skyping experts in different fields and writing up the conversation. Total time per expert is about 2 hours: 1 hour for the conversation, 1 hour for the write-up.
Anonymous #27:
I wish to register my emphatic partial agreement with much of this one, though I do still identify as EA, and have also talked with many people who are quite curious and interested in getting value from learning about new perspectives.
Anonymous #11:
Buck Shlegeris replied:
This. As a meat-eating EA who personally does think animal suffering is a big deal, I’ve found the attitude from some animal rights EAs to be quite annoying. I personally believe that the diet I eat A) is healthier than if I were vegan and B) allows me to be more focussed and productive than if I were vegan, letting me do more good overall. I’m more than happy to debate that with anyone who disagrees (and most EAs who are vegan are civil and respect this view), but I have encountered some EAs who refuse to believe that there’s any possibility of either A) or B) being true, which feels quite dismissive.
Contrast that attitude with what happened recently at a Los Angeles EA meetup where we went for dinner. Before ordering, I asked around whether anyone was vegan, because if anyone was, I didn’t want to eat meat in front of them and offend them. The person next to me said he was vegan, but that if I wanted meat I should order it, since “we’re all adults and we want the community to be as inclusive as it can.” I decided to get a vegan dish anyway, but having him say that made me feel more welcome.
Oh wow, thank you! That’s so awesome of you! I greatly appreciate it!
For what it’s worth and as an additional data point, I’m a meat eater and I didn’t feel like this was a big problem at EA Global in 2016. For a gathering in which animal advocacy/veganism is so prevalent, I would have thought it really weird if the conference served meat anyway. The vegetarian food provided was delicious, and the one time I went out to dinner with a group and ordered meat, nobody got up in my face about it.
Yes, that was my general impression of EA global. I feel like most of the people who do get upset about meat eaters in EA are only nominally in EA, and largely interact with the community via Facebook.
Anonymous #13:
Anonymous #1:
This feels really obvious from where I’m sitting, but it’s met with incredulity by most EAs I speak with. Applause lights for new ideas, paired with a total lack of engagement when anyone actually talks about new ideas, seem more dangerous than I think we’re giving them credit for.
See Lee Sharkey’s recent brief on pain control as an example, or Auren Forrester’s stuff on suicide.
I have been observing the same thing. What could we do to spark new ideas? Perhaps a recurring thread dedicated to it on this forum or Facebook, or perhaps a new Facebook group? A Giving Game for unexplored topics? How can we encourage creativity?
Creativity is a learnable skill, and it can also be encouraged through conversational/group-activity norms. http://malcolmocean.com/2016/05/honing-mode-vs-jamming-mode/ https://vimeo.com/89936101
Anonymous #39:
I think EA may have picked the lowest-hanging fruit, but there’s lots of low-ish hanging fruit left unpicked. For example: who, exactly, should be seen as the beneficiaries aka allkind aka moral patients? EAs disagree about this quite a lot, but there hasn’t been that much detailed + broadly informed argument about it inside EA. (This example comes to mind because I’m currently writing a report on it for OpenPhil.)
There are also a great many areas that might be fairly promising, but which haven’t been looked into in much breadth+detail yet (AFAIK). The best of these might count as low-ish hanging fruit. E.g.: is there anything to be done about authoritarianism around the world? Might certain kinds of meta-science work (e.g. COS) make future life science and social science work more robust+informative than it is now, providing highly leveraged returns to welfare?
There is also non-AI global catastrophic risk, like engineered pandemics, and low hanging fruit for dealing with agricultural catastrophes like nuclear winter.
What’s wrong with low hanging fruit? Not entertaining enough?
I agree that we’re in danger of having picked all the low-hanging fruit. But I think there’s room to fix this.
Anonymous #12:
Three points worth mentioning in response:
Most of the people best-known for worrying about AI risk aren’t primarily computer scientists. (Personally, I’ve been surprised by the number of physicists.)
‘It’s self-serving to think that earning to give is useful’ seems like a separate thing from ‘it’s self-serving to think AI is important.’ Programming jobs obviously pay well, so no one objects to people following the logic from ‘earning to give is useful’ to ‘earning to give via programming work is useful’; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, ‘technology X is a big deal’ will frequently imply both ‘technology X poses important risks’ and ‘knowing how to work with technology X is profitable’, so it isn’t surprising to find those beliefs going together.)
If you were working in AI and wanted to rationalize ‘my current work is the best way to improve the world’, then AI risk is really the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require skills and background that are unusual for CS/AI. (Ryan Carey wrote in the past: “The problem here is that AI risk reducers can’t win. If they’re not computer scientists, they’re decried as uninformed non-experts, and if they do come from computer scientists, they’re promoting and serving themselves.” But the bigger problem is that the latter doesn’t make sense as a self-serving motive.)
Except that, on point 3, the policies advocated and the strategies being tried don’t look like what you’d do if you were trying to reduce x-risk; they look like what you’d do if you were trying to get AI to work rather than backfire.
Anonymous #40:
Anonymous #31:
Where are all these crazy EA parties that I keep reading about? The only EA parties I’ve heard of were at EA Global.
My guess is that we very greatly underestimate the value of a higher baseline level of cross-pollination of ideas.
Anonymous #25:
Anonymous #15:
Anonymous #9:
Julia Wise of CEA replied:
The post in question went up yesterday: Clarifying the GWWC Pledge.
Anonymous #14:
Anonymous #14 added:
Link to the Open Philanthropy Project’s current view on giving now vs. later: http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update
Anonymous #10:
Anonymous #4:
I have spoken with two people in the community who felt they didn’t have anyone to turn to who would not throw rationalist-type techniques at them when they were experiencing mental health problems. The fix-it attitude is fairly toxic for many common situations.
If I could wave a magic wand, it would be for everyone to gain the knowledge that learning and implementing new analytical techniques costs spoons, and that when a person is bleeding spoons in front of you, you need a different strategy.
I strongly agree with this, and I hadn’t heard anyone articulate it quite this explicitly—thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn’t always “use this CFAR technique.”
(I think CFAR are great and a lot of their techniques are really useful. But I’ve also spent a bunch of time feeling bad about the fact that I don’t seem able to learn and implement these techniques in the way many other people seem to, and it’s taken me a long time to realise that trying to ‘figure out’ how to fix my problems in a very analytical way is very often not what I need.)
I’d be interested in contributing to something like this (conditional on me having enough mental energy myself to do so!). I tend to hang out mostly with EA and EA-adjacent people who fit this description, so I’ve thought a lot about how we can support each other. I’m not aware of any quick fixes, but things can get better with time. We do seem to have a lot of depressed people, though.
Speculation ahoy:
1) I wonder if, say, Bay area EAs cluster together strongly enough that some of the mental health techniques/habits/one-off-things that typically work best for us are different from the things that work for most people in important ways.
2) Also, something about the way in which status works in the social climate of the EA/LW Bay Area community is both unusual and more toxic than the way in which status works in more average social circles. I think this contributes appreciably to the number and severity of depressed people in our vicinity. (This would take an entire sequence to describe; I can elaborate if asked).
3) I wonder how much good work could be done on anyone’s mental health by sitting down with a friend who wants to focus on you and your health for, say, 30 hours over the course of a few days and just talking about yourself, being reassured and given validation and breaks, consensually trying things on each other, and, only when it feels right, trying to address mental habits you find problematic directly. I’ve never tried something like this before, but I’d eventually like to.
Well, writing that comment was a journey. I doubt I’ll stand by all of what I’ve written here tomorrow morning, but I do think that I’m correct on some points, and that I’m pointing in a few valuable directions.
I’m so intrigued by proposal 3)! I think when a friend is struggling like that I often have a vague feeling of wanting to engage/help in a bigger way than having a few chats about it, and I’m intrigued by this idea of how to do that. And also thinking about myself I think I’d love it if someone did that for me. I’m gonna keep that in mind and maybe try it one day!
I think I would find this super helpful. Low-level mental health stuff has contributed to me basically muddling around for years, nowhere near making good on what I could (in my best attempt at probably faulty self-assessment) potentially learn and contribute.
Anonymous #32:
Anonymous #32(c):
Anonymous #32(d):
Anonymous #32(e):
This is a great point. In addition to considering “how can we make it easier to get people to change their minds,” I think we should also be asking, “is there good that can still be accomplished even when people are not willing to change their minds?” Sometimes social engineering is most effective when it works around people’s biases and weaknesses rather than trying to attack them head on.
I agree that this is a problem, but I don’t agree with the causal model and so I don’t agree with the solution.
I’d guess that the majority of the people who take the EA Survey are fairly new to EA and haven’t encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality “tips and tricks” to become better at changing your mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that’s pretty typical.
TL;DR I don’t think there’s good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.
I’d propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept that. (This would happen even if the founders try to expose new EAs to all of the arguments—we would expect the founders to be able to best explain the arguments for their own cause area, leading to a bias.)
In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you’ll have students who have heard a lot about global poverty and perhaps meta-charity who then help organize speaker events and discussion groups, even though they’ve barely heard of other areas.
Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.
Perhaps one implication of this is that it’s better to target movement-growing efforts at students (particularly undergrads), since they’re less likely to have already made up their minds?
Anonymous #32(b):
What communities are the most novel/talented/influential people gravitating towards? How are they better?
I upvoted this mostly because it was new information to me, but I have the same questions as Richard.
Anonymous #32(a):
Anonymous #29:
Anonymous #8:
There’s a lot of EA outside the Bay! The Oxford/London cluster in particular is quite nice (although I live there, so I’m biased).
+1 London community is awesome. Also heard very good things about the Berlin & Vancouver communities.
I can recommend Berlin! Also biased. ;-)
Anonymous #37:
It’s fascinating how diverse the movement is in this regard. I’ve only found a single moral realist EA who had thought about metaethics and could argue for it. Most EAs around me are antirealists or haven’t thought about it.
(I’m antirealist because I don’t know any convincing arguments to the contrary.)
My impression is that many of the founders of the movement are moral realists and professional moral philosophers e.g. Peter Singer published a book arguing for moral realism in 2014 (“The Point of View of the Universe”).
Plus some who at least put some non-negligible probability on moral realism, in some kind of moral uncertainty framework.
Ah, cool! I should read it.
Anonymous #17:
Anonymous #5:
I think I’m the one being called out with the reference to “a non-profit art magazine” being framed as EA-relevant, so I’ll respond here. I endorse the commenter’s thought that
If I’m understanding the proposal correctly, it’s envisioning something like a reddit-style set of topic-specific subforums in which EA principles could be discussed as they relate to that topic. What I like about that solution is that it allows for the clarity of discussion boundaries that the commenter desires, but still includes discussions of cause-specific effectiveness within the broader umbrella of EA, which helps to facilitate cross-pollination of thinking across causes and from individual causes to the more global cause-neutral space.
Anonymous #34:
Anonymous #21:
Anonymous #16:
Other Open Phil links about AI: 2015 cause report, 2016 background blog post.
I’m confused by the bit about this not being reflected in organizations’ public faces? Early in 2016 OpenPhil announced they would be making AI risk a major priority.
Anonymous #3:
Anonymous #23:
Rob Bensinger replied:
Nick Tarleton replied:
Anonymous #23 replied:
Julia Wise of CEA replied:
Anonymous #35:
Anonymous #24:
Anonymous #33:
Anonymous #38:
There are versions of this I endorse, and versions I don’t endorse. Anon #38 seems to be interpreting #33 as saying ‘let’s be less tolerant of normal people/behaviors’, but my initial interpretation of #33 was that they were saying ‘let’s be more tolerant of weird people/behaviors’.
Anonymous #18:
Anonymous #26:
Anonymous #7:
Rob Bensinger replied:
Anonymous #36:
Anonymous #2:
I originally downvoted this comment, because some of the suggestions obviously suck, but some of the points here could be improved.
There are a lot of effective altruists whose ideas are just as good as those of anyone working at an EA non-profit or a university, but who, due to a variety of circumstances, aren’t able to land those jobs. Some effective altruists already run Patreons for their blogs, and I think the material coming out of them is decent, especially as they can lend voices independent of institutions on some EA subjects. They also have the time to cover or criticize topics that other effective altruists aren’t covering, since those EAs’ effort is taken up by a single research focus.
Nothing can be done about this criticism if some numbers aren’t given. Criticizing certain individuals for getting paid too much, or certain organizations for paying their staff too much, isn’t an actionable criticism unless one gets specific. I know EA organizations whose staff, including the founders who decide the budget, essentially get paid minimum wage. On the other hand, GiveWell’s co-founders Holden and Elie get paid well into the six figures each year. While I don’t myself much care, I’ve privately chatted with people who perceive this as problematic. Then, there may be some staff at some EA organizations who appear to others to get paid more than they deserve, especially when their salaries could cover one or more full-time salaries for other individuals perceived to be just as competent. That last statement was full of conditionals, I know, but it’s something I’m guessing the anonymous commenter was concerned about.
Again, they’d need to be specific about what organization they’re talking about. The biggest problem with this comment is that the commenter made broad, vague generalizations which aren’t actionable. It’s uncomfortable to make specific criticisms of individuals or organizations, yes, but the point of an anonymous criticism is to be able to do that, if it’s really necessary, with virtual impunity, while bad comments which are more or less character assassinations can easily be written off without a flamewar ensuing or feelings getting as hurt.
Anyway, I too can sympathize with demands for more accountability, governance, and oversight at EA organizations. For example, many effective altruists have been concerned time and again with the influence of major organizations like the Centre for Effective Altruism, which, even if it’s not their intent, may be perceived to represent and speak for the movement as a whole. This could be a problem. However, while EA need not only be a social movement predicated on and mediated through registered NPOs, it by and large is and will continue to be in practice, as most social movements that are at all centralized are. Making special asks for these organizations to adopt more democratic governance, without posting the suggestions directly to the EA Forum and making them consistent with how NPOs actually operate in a given jurisdiction, will just not result in change. These suggestions really stand out considering they’re more specific than any calls I’ve seen before, as if this were a desperate problem in EA, when I’ve at most seen similar sentiments expressed as vague concerns on the EA Forum.
The EA Forum and other channels like the ‘Effective Altruism’ Facebook group appear dominated by fundraisers and commentary on and from metacharities because those are literally some of the only appropriate outlets for metacharities to fundraise or publish transparency reports. Indeed, that they’re posting material besides fundraisers beyond their own websites is a good sign, as it’s the sort of transparency and peer review the movement at large would demand of metacharities. Nonetheless, between this and the constant chatter about metacharities on social media, I can see how the perception arises that most donations are indirect and go to metacharities. However, this may be illusory. The 2015 EA Survey, the latest for which results are available, shows that effective altruists overwhelmingly donate to GiveWell’s recommended charities. Data isn’t available on the amounts of money self-identified effective altruists are moving to each of these charities, so it’s possible lots of effective altruists earning to give are making primarily indirect donations. However, anecdotally, this doesn’t seem to be the case. If one wants to make that case, and then mount a criticism based on it, one must substantiate it with evidence.
Anonymous #20: