Great guide! Thanks for sharing. I'd like to make a suggestion, though I'm not sure it's applicable to your organizations. Some areas can pose risks for certain types of attendees due to local laws. For example, laws that criminalize trans people using their preferred bathrooms, or laws that increase deportation risks for undocumented people. So that's something to be aware of: when event planners are deciding where to host an event, they could look into local laws.
Tiresias
Hm, maybe, I'm not sure. I like to have a professional atmosphere, and public sharing of misdeeds can lead to a culture of gossip. But I think it is appropriate to speak publicly about it if the situation was mishandled (in my case, unclear, as it's been reopened) or if the person should be blacklisted (I do not think this is the case here).
Lol, not you. I deleted most of the detail I included in that comment because I feel like it's distracting from the SBF discussion (this convo should not be used as a soapbox for me), and the case has recently been reopened (which means it's probably best if I don't talk about it, and also there might be a good outcome). And I also just worry about pissing people off.
It’s also like, what are people supposed to do with an anonymous comment with a very vague allegation.
Agree. And also worth noting it seems like he may have never actually been that rich, but just, you know, lied and did fraud.
The general thing I'm hearing is that, with a lot of people who commit misconduct, you/CEA will hear about the misconduct relatively early on, and you should take action before things get too large to correct. Early and decisive action is important. Leadership should be taking a lot more initiative in responding to misconduct.
This tracks with my experience too. I've reported professional misconduct, had it not be taken seriously, and watched that person continue to gain power. The whole experience was maddening. So, yeah, +1 to early intervention following credible misconduct reports.
It really must feel awful to report serious misconduct and have it not be taken seriously. I’ve had a similar experience and it crushed me mentally.
I’ve been thinking about this situation a lot. I don’t know many details, but I’m trying to sort through what I think EA leadership should have done differently.
My main thought is, maybe in light of these concerns, they should have kept taking his money, but not tied themselves to him as much. Though I don't know many details about how they tied themselves to him. It's just that handling misconduct cases gets complicated when the misconduct is by one of the 100 richest people in the world. And while it's clear Sam treated people poorly, broke many important rules, and lied frequently, it was not clear he was stealing money from customers. So it just leaves me confused. But thank God I am not in charge of handling these sorts of things.
I know it’s also not your responsibility to know what to do in situations like this, but I’d be curious to hear what you wished EA leadership/infrastructure had done differently. I think that might help give shape to my thoughts around this situation.
I don't know if I'm communicating super clearly here, so I want to clarify: this is not meant as a critical comment at all! I hope it doesn't read as downplaying your experience, because I do feel super alarmed about everything and get the sense EA fucked up big here. I am fully in support of you, but I'm worried my confusion makes that harder to read.
Retracting because, on reflection, I'm like: no one knew he was stealing funds, but I think leadership knew enough of the ingredients to not be surprised by this. It's not just Sam treating employees poorly; leadership heard that he would lie to people (including investors), mix funds, and play fast and loose with the rules. They may not have known the disastrous ways these would combine. Even so, it seems super bad, and while I'm still confused as to what the ideal way to handle it would have been, it does seem clear to me it was egregiously mishandled.
I’m not sure about this suggestion, but I wonder if as part of the EAG survey, it might ask if you had an uncomfortable experience with someone.
I know someone who had an uncomfortable experience at this EAG, but on the minor end of the spectrum. I don't think they considered reporting it to CH at CEA until they heard that someone else independently brought up that they'd had an uncomfortable experience with the same person at that EAG. On its own, it was an incredibly minor experience that would seem excessive to bring up to CEA. But hearing it as part of a pattern of behavior made it more concerning.
So, given that reporting to CEA can feel too serious for many offenses, maybe filling in a survey would be a place people could report more minor experiences like this?
I think another barrier to reporting minor incidents is that the potential reporter wouldn't want too serious of sanctions to be taken against the person. A lot may just want someone in a position of authority to say, "Hey, you may not realize it, but you're making people uncomfortable."
I’m a little worried that instilling this policy would lead to a hostile atmosphere or something where fingers are pointed at each other. But maybe worth testing this at an EAGx or something? Not sure.
Hm, I think I did not communicate my concern clearly. The concern I have is not with the CH lead sympathizing with the text of the post. At least in a personal capacity, I agree that the text of the post adds to the conversation in a useful way. I also understand that women are not all in agreement on these points.
The concern I have is the implicit endorsement of the meme shared at the top of the post. Not the text of the post. It’s one thing for community members to share these sorts of trivializing memes that mock the position they disagree with. But when the CH lead shares a post that opens with that meme, I wonder, is that how you see that side of the conversation?
I’m not saying the meme at the top of the post means you can’t link to it, quote it, reference it. But I’d want some caveat.
Maybe my broader point is that this is clearly an emotionally intense topic, one that touches on many people's personal experiences. As CH lead, many people are looking to you right now, and a lot is riding on this. You have substantial power, and we as individual community members have much less. So it feels important, at least to me, that this issue is being handled sensitively, with a lot of empathy and understanding. It's fine for you to express sympathy for different positions, and I think it's valuable to transparently see where you're at. But many people have considered this meme to be trivializing and mocking. I think memes making fun of an argument they're disagreeing with will generally have that effect. So seeing you tacitly endorse mocking/trivializing content is upsetting.
The meme is absolutely trivializing. It is mocking the opposite side of the discussion. The text of the post is not trivializing. Agreeing with the meme does not mean the meme is not trivializing.
One thing I struggle with in discourse is expressing agreement. Agreeing seems less generative, since I often don't have much more to say than "I agree with this and think you explain it well." I strongly agree with this post and am very happy you made it. I have some questions/minor points of disagreement, but I want to focus on what I agree with before I get to that, since I overwhelmingly agree and don't want to detract from your point.
The sentiment “we are smarter than everyone and therefore we distrust non-EA sources” seems pervasive in EA. I love a lot about EA, I am a highly engaged member. But that sentiment is one of the worst parts about EA (if not the worst). I believe it is highly destructive to our ability to achieve our aims of doing good effectively.
Some sub-communities within EA seem to do better at this than others. That being said, I think every element of EA engages in this kind of thinking to some extent. I don’t know if I’ve ever met any EA who didn’t think it on some level. I definitely have a stream of this within me.
But, there is a much softer, more reasonable version of that sentiment. Something like “EA has an edge in some domains, but other groups also have worthwhile contributions.” And I’ve met plenty of EAs who operate much more on this more reasonable line than the excessively superior sentiment described above. Still, it’s easy to slip into the excessively superior sentiment and I think we should be vigilant to avoid it.
------
Onto some more critical questions/thoughts.
My epistemics used to center on "expert consensus." The COVID-19 pandemic changed that. Expert consensus seemed to frequently be wrong, and I ended up relying much more on individuals with a proven track record, like Zeynep Tufekci. I'm still not sure what my epistemics are, but I've moved toward a forecasting-based model, where I most trust people with a proven track record of getting things right, rather than experts. But it's hard to find people with such a track record, so I almost always still default to trusting experts. I certainly don't think forum/blog posts fit into this "proven track record" category, unless it's the blog of someone with a proven track record. But "proven track record" is still a very high standard; Zeynep is literally the only person I know who fits the bill, and it's not like I trust her on everything. My worry with people using a "forecaster > expert" model is that they won't have a high enough standard for what qualifies someone as a trustworthy forecaster. I'm wondering what your thoughts are on a forecaster model.
Another thought: the slowness of peer review does strike me as a legitimate issue, though I am not in the AI field at all, so I have very little knowledge here. I would still like to see AI researchers make more of an effort to get their work peer-reviewed, but I wonder if there might be some dual system, where less time-sensitive reports get peer reviewed and are treated with a high level of trust, while more time-sensitive reports do not go through as rigorous a process but are still shared, albeit with a lower level of trust. I'm really not sure, but some sort of dual system seems necessary to me. It can't be that we totally disregard all non-peer-reviewed work?
Yeah, I strongly agree and endorse Michael’s post, but this line you’re drawing out is also where I struggle. Michael has made better progress on teasing out the boundaries of this line than I have, but I’m still unclear. Clearly there are cases where conventional wisdom is wrong—EA is predicated on these cases existing.
Michael is saying that on questions of philosophy we should not accept conventional wisdom, but on questions of sociology we should. I agree with you that the distinction between the sociological and the philosophical is not quite clear. I think your example of "what should you do with your life" is a good illustration of where the boundaries blur.
Maybe, I think "sociological" is not quite the right framing, but something along the lines of "good governance" is. The peer review point Michael brings up doesn't fit into that dichotomy. Even though I agree with him, I think "how much should I trust peer review" is an epistemic question, and epistemics does fall into the category where Michael thinks EAs might have an edge over conventional wisdom. That being said, even if I thought there was reason to distrust conventional wisdom on this point, I would still trust professional epistemologists over the average EA here, and I would find it hard to believe that professional epistemologists think forums/blogs are more reliable than peer-reviewed journals.
Yeah, I’m not sure that people prioritizing the Forum over journal articles is a majority view, but it is definitely something that happens, and there are currents in EA that encourage this sort of thinking.
I'm not saying we shouldn't be somewhat skeptical of journal articles. There are huge problems in the peer-review world. But forum/blog posts and what your friends say are not more reliable. And it is concerning that some elements of EA culture encourage you to think that they are.
Evidence for my claim, based on replies to some tweets by Ineffective Altruism (who makes a similar critique):
1: https://twitter.com/IneffectiveAlt4/status/1630853478053560321?s=20 Look at replies in this thread
2: https://twitter.com/NathanpmYoung/status/1630637375205576704?s=20 Look at all the various replies in this thread
(If it is inappropriate for me to link to people’s Twitter replies in a critical way, let me know. I feel a little uncomfortable doing this, because my point is not to name and shame any particular person. But I’m doing it because it seems worth pushing back against the claim that “this doesn’t happen here.” I do not want to post a name-blurred screenshot because I think all replies in the thread are valuable information, not just the replies I share, so I want to enable people to click through.)
“I’ve seen this debate play out many times online, but empirically, it seems to me like EA-ish orgs with a lot of hiring power (large budgets, strong brands) are more likely than other EA-ish orgs to hire people with strong track records and relevant experience.”
Based on speaking to people at EA orgs (and looking at the orgs' staff lists), I disagree with this. When I have spoken to employees at CEA and Open Phil, the people I've spoken to have either (a) expressed frustration about how focused their org is on hiring EA people for roles that don't seem to need it or (b) defended hiring EAs for roles that don't seem to need it. (I'm talking about roles in ops, personal assistants, events, finance, etc.)
Maybe I agree with your claim that large EA orgs hire more "diversely" than small EA orgs, but I disagree with what I read as your implication (that large EA orgs do not prioritize value alignment over experience). I read this as your implication since the point you're responding to isn't focused on large vs. small orgs.
I could point to specific teams/roles at these orgs which are held by EAs even though they don't obviously need to be. But that feels a little mean and targeted, like I'm implying those people are not good at their jobs or something (which is not my intent for any specific person). And I think there are cases for wanting value alignment in non-obvious roles, but the question is whether the tradeoff in experience is worth it.
Can you share more on your view of the distinction between "the purpose of my personal life is not to improve the health of EA" versus "I have a personal responsibility to not conduct my personal life in a way that systematically/repeatedly harms the EA community"? The first sentence, at least on a very literal reading, seems uncontroversial to me. The second sentence seems to be where people run into trouble and disagreement, and is what proposals like this one are trying to enact. Are you also disagreeing with the second statement? If so, could you share more? Or do people generally agree with the second statement, with the disagreement coming from what sorts of behaviors are "systematically harmful"?
Or is there a third position that I’m missing?
I also want to note that, while I don’t think this is severe or anything, I find it concerning that a community health representative is ostensibly endorsing a post which contains an image/meme trivializing a discussion around what norms to set to make EA a less destructive space for women. Not that you can’t endorse any ideas expressed in the post, but without a caveat it makes me wonder, does the community health team share that trivializing view towards these discussions?
(This might be too harsh, but I'm sharing my gut-level reaction.)
[Deleting the earlier part of my comment because it involved an anonymized allegation of misconduct I made, that upon reflection, I feel uncomfortable making public.]
I also want to state, in response to Ivy's comment, that I am a woman in EA who has been demoralized by my experience of casual sexism within it. I've not experienced sexual harassment. But the way the Bloomberg piece describes how EA/rats talk about women feels very familiar to me (as someone who interacts only with EAs and not rats). E.g., "5 year old in a hot 20 year old's body," or introducing a woman as "ratbait."
Great! I might reach out in a few weeks (when I plan to dive into bio resources).
We can maybe think through this with real-life examples. If you have a friend who has a minor crush on someone they met once or twice, and you don't know that person, what is the primary thing you think about them? What category do you put them in?
Agree! The recipes I’ve made from them have been consistently very good.
I love cooking vegan or veganized cuisines from other cultures!
Chez Jorge makes vegan Taiwanese food, and everything I’ve tried from him is a banger: https://instagram.com/chez.jorge?igshid=YmMyMTA2M2Y=
One of my favorite soup/stew bases is coconut milk, tomato paste, peanut butter. I believe this is a West African flavor combo. You can google West African peanut soup if you want a recipe (not all have coconut milk but I strongly recommend).
Made with Lau is not a vegan channel, but they have a lot of vegan recipes, and many are easily veganizable (they will even often state how to veganize them). It's a channel where a son documents his father's Cantonese recipes (the father also worked in an American Chinese restaurant). https://youtu.be/lWJpa0MRHAs
I have never made better Thai food than the Thai food I make following Pailin’s Kitchen. She’s not vegan but has many veganizable recipes. Just amazing. The stuff tastes like it came from a Thai restaurant. https://youtu.be/miLLiyXd1Bc
My favorite cuisine to make is probably Korean. I don’t have any specific people I follow, but I flipped through the Korean Vegan Cookbook by Joanne Lee Molinaro and it seems very good. My preferred dish is tteokbokki but I haven’t found a good solo recipe for it and will usually combine a few.
I'm not that knowledgeable about the biosecurity field, but from afar I've thought that EAs working in biosecurity tend to be some of the most collaborative, professional, and respectful of non-EA contributions. This post is an example of that. Thanks for sharing.
Do you have any additional sources you'd recommend? Things you didn't add to the initial list for whatever reason? I'd like to read widely on this topic.
Also, OFTW's harassment policy seemed to focus on harassment related to protected characteristics, like gender, race, etc. But what about harassment for other reasons? E.g., someone harassing/threatening/intimidating a person who rejected their grant application. GWWC's harassment policy was more general, and I would assume it would cover this type of harassment, though it was not made explicit.