In no community I was ever part of before have I had to tell newcomers “beware, this community is plagued by sexism, racism and abuse”. That I have to tell this to people I introduce to EA is really absurd.
The post mentioned “attack on EA”. The main attack on EA I see comes from the people causing issues around sexism and/or racism. I think they significantly slow down our productivity and ability to do good in the world. Why should women in EA have to spend time thinking about how to protect themselves from abuse? I have never been in a place where this is the modus operandi for women.
I am growing increasingly concerned that the people supposedly working to protect us from unaligned AI have such weak ethics. I wonder whether a case can be made that it is better to have a small group of high-integrity people work on AI safety than to have a group even twice as large that is 50% low-integrity individuals. I wouldn’t want a bank robber to safeguard democracy, for example.
If you think you have to resort to some “red pilling” method or similar to be sexually active, you are so lost. If you feel yourself heading down that path, please stop immediately and work on e.g. getting a male mentor: someone sexually active and well regarded by the women he is around. Actually, some of my best mentors around sexuality have been my female friends. I really recommend men foster deep, meaningful friendships with heterosexual women. When they tell you about their dating experiences, you will very quickly understand how to behave around women you are interested in sexually.
I don’t think it’s true that EA is plagued by sexism, racism and abuse, or that women need to be more vigilant about protecting themselves from sexual abuse in EA than in the wider community. And I don’t think the info in the post indicates this is true.
My main takeaway from the post and from Lucretia’s experience is that male EAs, especially researcher-types in SF who lack worldly experience, should get training around sexual assault in order to better identify bad actors when they do appear, and prevent them from causing harm (rather than accidentally supporting them (!)), and to just generally be halfway-decent allies.
But this is very different from the picture you paint, a picture that I worry could result in a greater gender imbalance in EA, by inaccurately putting off women who are considering getting involved.
Personally I find myself worrying much less about sexism, abuse or physical aggression from male EAs than I do from men more broadly.
I do think “EA is plagued with sexism, racism, and abuse” is a very coarse first approximation of what’s actually going on.
A better, second approximation may look like this description of “the confluence”:
“The broad community I speak of here are insular, interconnected subgroups that are involved most of these categories: tech, EA (Effective Altruists), rationalists, Burning Man camps, secret parties, and coliving houses. I’ve heard it referred to as a “clusterf**k” and “confluence”, I usually call it a community. The community is centered in the San Francisco bay area but branch out into Berlin, London/Oxford, Seattle, and New York. The group is purpose-driven, with strong allegiance to non-mainstream morals and ideas about shaping the future of society and humanity” (Source)
There is probably an even better third approximation out there.
I do think that these toxic dynamics largely got tied to EA because EA is the most coherent subculture that overlaps with “the confluence.” Plus, EA was in the news cycle, which incentivized journalists to write articles about it, especially when “SBF” and “FTX” get picked up by search engines and recommender systems. EA is a convenient tag word for a different (but overlapping) community that is far more sinister.
As I wrote in my response above, I’m mainly sad that my experience of EA was through this distorted lens. It also seems clear to me that there are large swathes (perhaps the majority?) of EA that are healthy and well-meaning, and I am happy this has been your experience!
One of my motives for writing this post was to give people a better “second approximation” than EA itself being the problem. I do believe people put too much blame on EA, and one could perhaps make the argument that more responsibility could be put on surrounding AI companies, such as OpenAI/Anthropic, some of whose employees may be involved in these dynamics through the hacker house scene.
I’d be interested in hearing more about the bolded sentence from a conceptual standpoint.
As a general rule, US society does not expect employers to investigate and discipline their employees’ off-duty conduct, even criminal conduct. We do expect the employer to respond when there is a sufficient nexus between the conduct and equal employment opportunity at the employer (e.g., off-work sexual harassment of a coworker). The employer generally can discipline for off-duty conduct, subject to any limitations in an employment contract or collective bargaining agreement. But usually we don’t expect the employer to do so.[1]
I’m not sure how we get to the position that OpenAI and Anthropic have a duty to investigate and adjudicate here without extending the same duty to (e.g.) the local grocery store, and at least to other forms of serious criminal conduct. And I don’t think a world in which the store does have that obligation is a world I would favor on net.
But I could be misunderstanding the bolded sentence, or there may be a sufficient nexus to place a duty on OpenAI/Anthropic while maintaining some sort of limiting principle on the employer’s scope of responsibility.
There are exceptions, to be sure. For instance, some sorts of conduct implicate fitness to hold certain roles (e.g., a professional truck driver who drives drunk off the clock, someone with significant discretionary authority over personnel matters who engages in racist conduct).
Yeah, this is interesting. I would invoke some of the content from Citadels of Pride here, where we can draw an analogy between Silicon Valley and Hollywood.
I would argue that hacker houses are being used as professional grounds. There is such an extent of AI-related networking, job-searching, brainstorming, ideating, startup founding, angel investing, and hackathoning that one could make an argument that hacker houses are an extension of an office. Sometimes, hacker houses literally are the offices of early-stage startups. This also relates to Silicon Valley startup culture’s lack of distinction between work and life.
This puts a vulnerable person trying to break into AI in a precarious position. Being in these environments becomes somewhat necessary to break in; however, one has none of the legal protections of “official” networking environments, including HR departments for sexual harassment. The upside for an aspirant could be a research position at her dream AI company through a connection she makes here. The downside could be getting drugged and raped if her new “acquaintance” decides to put LSD in her water.
Hacker houses would then give the AI company’s employees informal networking grounds to conduct AI-related practices while the companies derisk themselves from liability. This makes it a very different situation from criminal activity at the local grocery store.
I think there may be two potential, interrelated theories of responsibility here—that the hacker houses are “professional grounds” more generally, and that they are being used to conduct the specific company’s business.
I am more inclined to put investigative/adjudicative responsibility on the employer where the environment was used by one of the employer’s agents to promote the employer’s objectives, and either the employer-specific influence was pervasive or there was a sufficient nexus between the employer-promoting activity and the misconduct. To illustrate the two possible poles here:
Sam works for SmallAI. He attends a party at Hacker House. Neither Sam nor anyone else from SmallAI has ever conducted even informal recruiting or other activities to promote SmallAI’s specific interests at Hacker House. Sam assaults another attendee.
Bob works for BigAI. BigAI encourages informal recruitment, and Bob has been informally recruiting someone. At a Hacker House party, he assaults that person.
Although Hacker House may be professional grounds, it does not appear to be SmallAI’s professional grounds. Sam’s interactions are solely in his personal capacity. Unless there’s more, I don’t see any action (or acquiescence) by SmallAI here that I could use to justify tagging it with responsibility to investigate and adjudicate things that happen at Hacker House.
In contrast, BigAI has a responsibility to investigate and adjudicate Bob’s actions. It acquiesced to, if not encouraged, Bob’s informal recruitment activities and stood to benefit from them. There was a strong connection between the recruitment activities and the assault (i.e., both involved the same person).
And of course, there are a hundred other scenarios between those poles, which is where things get complicated for me.
I am sure there are cleaner cases, like your “Bob works for BigAI” example, where taking legal action and amplifying it in the media could produce a Streisand effect that raises cultural awareness of the more ambiguous cases. Some comments:
Silicon Valley is one “big, borderless workplace”
Silicon Valley is unique in that it’s one “big, borderless workplace” (quoting Nussbaum). As she puts it:
Even if you are not currently employed by Harvey Weinstein or seeking employment within his production company, in a very real sense you always are seeking employment and you don’t know when you will need the good favor of a person of such wealth, power, and ubiquitous influence. (Source.)
Therefore, policing along clean company lines becomes complicated really fast. Even if Bob isn’t directly recruiting for BigAI (but works for BigAI), being in Bob’s favor could improve your chances of working at SmallAI, which Bob has invested in.
The “borderless workplace” nature of Silicon Valley, where company lines are somewhat illusory and high-trust social networks are what really matter, is its magic and the way it functions. But when it comes to policing bad behavior, it is also its downfall.
An example that’s close to scenarios that I’ve seen
Alice is an engineer at SmallAI who lives at an SF hacker house with her roommates Bob, Chad, and Dave, who all work for BigAI. The hacker house is, at first, awesome because Bob, Chad, and Dave frequently bring in their brilliant industry friends. Alice gets to talk about AI every day and build a strong industry network. However, there are some problems.
Chad is very into Alice and comes into her room often. Alice has tried to set a firm boundary, but Chad is not picking up on it, whether intentionally or not. Alice starts getting paranoid and is very careful to lock her room at night.
Alice does not want to alienate Chad, who has a lot of relationships in the industry. She feels like she’s already been quite firm. She feels like she cannot tell Bob or Dave, who are like Chad’s brothers. She’s afraid that Bob or Dave may think she’s making a big deal over nothing.
Alice’s friend Bertha, who is an engineer at MidAI, has stopped coming to the hacker house because, as she tells Alice, she finds their parties to be creepy. At the last party, the hacker house hosted a wealthy venture capitalist from AI MegaInvestments who appeared to be making moves on a nineteen-year-old female intern at LittleAI. When Bertha tried to say something, Bob and Chad seemed to get really annoyed. Who does Bertha think she is, this random MidAI employee? The venture capitalist might invest in their spin-off from BigAI one day! Bertha stops coming to the hacker house, and her network slightly weakens.
Alice also debates distancing herself from her house. She secretly agrees with Bertha, and she’s finding Chad increasingly creepy. However, she’s forged such an incredible network of AI researchers at their parties—it all must be worth it, right? Maybe one day she’ll also transition into BigAI, because she now knows a ton of people there.
One day, Alice’s other ML engineer friend, Charlotte, visits their hacker house. She’s talking a lot to Bob, and it looks like they’re having a good time. Alice does not hear from Charlotte for a while, and she doesn’t think much of it.
Six months later, Charlotte contacts Alice to say that Bob brought her to his bedroom and assaulted her. Charlotte has left Silicon Valley because of traumatic associations. Alice is shocked and does not know what to do. She doesn’t want to confront or alienate Bob. Bob had made some comments that Charlotte had been acting kind of “hysterical.” And what if Charlotte is lying? She and Charlotte aren’t that close anyway.
Nevertheless, Alice starts to feel increasingly uncomfortable at her hacker house and eventually also leaves.
As you can see, Alice’s hacker house is now a clusterf*ck. Alice, Bertha, and Charlotte have effectively been driven from the industry due to cultural problems, while Bob, Chad, and Dave’s networks continue to strengthen. This scenario happens all the time.
Proposal
I propose that the high-status companies and VC firms in Silicon Valley (e.g. OpenAI, Anthropic, Sequoia, etc.) could make it more explicit that they are aware of Silicon Valley’s “big, borderless workplace” nature: sexual harassment at industry-related hacker houses, co-working spaces, and events, even when not on direct company grounds, reflects on the company to some extent, and it is not acceptable.
While I don’t believe these statements will deter the most severe offenders, pressure from institutions/companies could weaken the prevalent bystander culture, which currently allows these perpetrators to continue harassing/assaulting.
Thank you for explaining the “big, borderless workplace” concept. This is the first time I have seen a reasonable-looking argument in favour of company policies restricting employees’ actions outside work, something which I had previously seen as a pure cultural-imperialist power grab by oppressive bosses.
I think this framing is helpful: one could push for companies to take a stance as described in your proposal, and publicize whether or not they had done so. Good talent has options, and hopefully a decent fraction of that talent would prefer not to work for a company that wasn’t doing its part in addressing sexual harassment in the subculture.
The details would be tricky—a statement of disapproval would probably not accomplish much without some sort of commitment to enforcement action. In other words, I think the effectiveness of AI Corp.’s statement is contingent on there being a policy or practice with some teeth / consequences behind it. Otherwise, it seems pretty performative.
I started writing a list of concerns AI Corp. might have with such a proposal, in an attempt to fashion it in a way that maximizes the possibility of getting at least one major AI firm to agree to it—and thus pressuring the others to follow suit. But I decided that might be responding to a version of the proposal that might not be what you had in mind.
I think the key design elements would include whether there was an enforcement mechanism, and if so what it would be triggered by.
One of the challenges here, I think, would be delineating differences (if any) in what is acceptable (or at least actionable) in the workplace / with co-workers vs. in the hacker-house subculture vs. in the broader world. I consider my views pretty strict on harassment/EEO matters in the workplace. But part of the reason I’m willing to make employees potentially walk on eggshells there is that workplace harassment/EEO law generally only applies at the workplace, with co-workers, and in other circumstances with a clear nexus to employment. The risk of workplace harassment/EEO policies suppressing acceptable sexual behavior is not as great a concern to me because the policies cover only a slice of the individual’s life, leaving lots of opportunities for sexual expression off the job. To the extent that a policy is going to cover the employee’s primary subculture, it is going to affect much more of the employee’s life, and the risk of a chilling effect on acceptable sexual behavior seems potentially more relevant.[1]
It’s certainly possible that I would reach similar results about what is acceptable / actionable in the subculture as someone whose view of what was acceptable / actionable in the workplace was less strict than mine but who applied much the same standard to the subculture as to the workplace.
There are exceptions, to be sure. For instance, some sorts of conduct implicate fitness to hold certain roles (e.g., a professional truck driver who drives drunk off the clock, someone with significant discretionary authority over personnel matters who engages in racist conduct).
When do these exceptions apply? They may here, if the same people who showed such poor judgement in other contexts also have decision-making power over high-leverage systems.
[Caveat: I’m reporting what I perceive as the social norms—I would not personally want low-character or poor-judgment people working for me.]
As I see it, the societal expectation I referenced as an example generally kicks in when the connection between the specific off-work conduct and the person’s job duties is particularly strong and there is a meaningful risk of harm to people other than the corporation. I don’t think that kind of direct connection, as in my examples, exists for the median technical AI role here.
I think society’s expectations about addressing off-duty conduct increase as you go up the corporate food chain—they would be different for a senior executive than for a non-supervisory engineer. So I think society would generally expect mid-size+ corporations to investigate and adjudicate substantial claims that their CEO committed a sexual assault after being hired and while off-duty, even absent any connection to corporate activity.
This is mostly a rehash of things already said by others, but my read is still that the version of that statement that has ‘SFBA’ instead of ‘EA’ in it is the only thing resembling a first approximation, and EA would only appear from a 2nd approximation onwards. E.g. to my knowledge I don’t know anyone who lives in a hacker house, and I’d never heard of the phenomenon before the TIME article.
In general I’m in favour of warning people about (even potentially) bad actors/groups and toxic cultish behaviour, and have done so previously. I just don’t see how it isn’t counterproductive to tell women that a movement is “plagued” by something that appears to centre on a city where <7% of people who identify with the movement live (based on the 2020 EA survey). [I take your point that this toxic group of people has branched beyond SF, but it still seems very much centred there.]
What’s the reference class for “the wider community”? It’s plausible that a comparison would differ based on the reference class. What broad communities do you have in mind as being better re sexual assault?

Based on demographics alone, I’d predict lower rates in broader American society (based on males being much more likely to commit assault, and the strong trend toward desistance from crime by the time a person turns 35 or so).
Intuitively that doesn’t seem like the right base rate to me, even if the reference class is the whole of society? If the average woman considering getting involved in EA is in her early to mid twenties (e.g. the average female EAG attendee was 28 I believe), I would guess that the average age of the men she interacts with would be much lower than the population average? Especially if she is a student.
In terms of the reference class I had in mind, it was something like: for a given cluster of EAs attached to another subculture, EAs would have on average less sexism and abuse than the surrounding subculture. So e.g. EAs within the tech scene, EAs within the Burning Man scene, within various academic scenes, etc. Interested in your thoughts on that.
I don’t have the data to speculate—that’s why we need robust data collection.
The comment that started this discussion was:
In no community I was ever part of before have I had to tell newcomers “beware, this community is plagued by sexism, racism and abuse”. That I have to tell this to people I introduce to EA is really absurd.
From the original commenter’s perspective, he would likely advise his friends in comparison to age-matched society as a whole (not adjusting for gender imbalance in EA, and not adjusting for EAs being attached to high-sexism/abuse subcultures).
The commenter’s statement isn’t inconsistent with the hypothesis that (e.g.) EAs within the tech scene display less sexism and abuse than people in the tech scene as a whole. It’s plausible that EAs tend to be “attached to . . . subculture[s]” that have very high rates of sexism and abuse relative to age-matched members of society as a whole. There could be lower rates of sexism and abuse among EAs in those subcultures than among other subculture members . . . but still high compared to age-matched society as a whole.
Hi Rebecca, I do take your points seriously, especially as you have first-hand experience navigating EA as a woman. And I really do want you to be correct. However, I can’t shake the feeling that EA is not a safe place for women, especially in the Bay Area / in AI safety. I feel left with two choices: warn women, or just don’t encourage women to engage with EA. I believe the former is better, as they can then choose, and it places more agency with the women I talk with.
Also, I have heard too many first-hand accounts of really substandard behaviour and have even once been subject to inappropriate comments about body size from strangers at an EA social. I am about 80% confident we have a major issue with inappropriate behaviour in EA. If there was an alternative community just without the bad behaviour, I would switch in an instant. But because there is not, I am determined to do what I can to fix things.
And I know I’m not responding directly to the post, but I feel really upset about all these posts on abuse and aggression, and maybe I am just venting. But hopefully I am also showing others who feel like me that they are not alone, and maybe building some marginal amount of momentum towards change.
I’m sorry that you had to be subject to those inappropriate comments.

Thanks! I felt pretty bad and left the event, but compared to what other people go through in EA it was peanuts. Still, it makes me update towards thinking there is something holding the EA movement back, because too many people have these negative experiences; my own experience to some degree matches that of the all-too-many posts here on the EAF about inappropriate behavior and the toll it takes on the people subjected to it.
Thank you for this comment. You’ve made some things explicit that I’ve been thinking about for a long time. It feels analogous to saying the emperor has no clothes.
I am growing increasingly concerned that the people supposedly working to protect us from unaligned AI have such weak ethics. I wonder whether a case can be made that it is better to have a small group of high-integrity people work on AI safety than to have a group even twice as large that is 50% low-integrity individuals. I wouldn’t want a bank robber to safeguard democracy, for example.
The idea of having fewer AI alignment researchers, but those researchers having more intensive ethical training, is compelling.
Actually, some of my best mentors around sexuality have been my female friends. I really recommend men foster deep, meaningful friendships with heterosexual women. When they tell you about their dating experiences, you will very quickly understand how to behave around women you are interested in sexually.
There is currently a huge vacuum in mentorship for men about how to interact with women (hence the previously burgeoning market of red-pill content, dating coaches, Jordan Peterson, etc.). More thought leadership by men who have healthy relationships with women would be a service to civilization. Maybe you should write some blog posts :).
Thanks for taking time to respond so thoughtfully, Lucretia! I am considering many things to improve things, especially in EA:
-Make a reading group on allyship happen, with a focus on EA (to anyone reading this: please let me know if you are a self-identified man and want to be part of this!)
-Try to find a way to talk to and understand the men who have conflicted feelings about gender equality etc. (to anyone who might read this: please let me know if you would like to talk—I understand trust can be an issue but I think we can work through that)
-Write posts e.g. here on the EAF, but I am unsure of the balance between taking a stance and taking up too much space
-Do my share of reproductive work at home with a wife who is a successful and ambitious academic and with 2 small kids—I feel like this should be my first priority!
This is a great list! I think this one is extremely valuable and something that men may be better equipped to do than I would be:
Try to find a way to talk to and understand the men who have conflicted feelings about gender equality etc. (to anyone who might read this: please let me know if you would like to talk—I understand trust can be an issue but I think we can work through that)
I’d love to write another post about this too, targeted at men who have conflicted feelings about gender equality, sexual violence, etc. The problem with this current post is it may be preaching to the choir :) Someone (probably me) needs to shill these ideas on AI Twitter, rebranded for the average mid-twenties male AI researcher. “Fighting bad actors in AI” has been one message I’ve been playing with.