AI Threat Countermeasures checks bad actors in AI where companies and institutions are not incentivized to do so.
Lucretia
I do not see valid claims that L’s report was false on the post you link, and to be totally honest, this comment is a bit of a red flag.
I don’t see anything from legitimate sources on the linked post showing that L’s report was false.
I can’t comment on whether these cases were EA involved because I don’t know.
As you said, the Silicon Valley AI community is extremely small, which makes this relevant to the EA AI sphere, and AI safety more broadly.
Thank you for your comment. I understand that promoting narratives claiming autistic men are more likely to be sexual predators is deeply unfair and encourages neurotype discrimination (and tracks alongside some racist narratives).
That said, I don’t think this post is saying that, nor is that the point of the post. I think it’s pointing out that this has historically correlated with risk factors for all genders. I have also seen (usually wealthy, high-status) men use autism as an excuse for boundary-violating behavior (they may not even be autistic in the first place, lol).
I would love to find a way to talk about this that does not unfairly condemn non-predatory autistic men.
It is pretty notable how frequently bad actors / bystanders out themselves on this forum if you watch for red flags.
UPDATE 10/8/23
For an update on the Silicon Valley AI bad actor situation, I recommend this exposé by @Mandelbrot.
Thank you for this absolutely brilliant exposé. I know too many people who have stories like these.
I worry about the broader effects on AI alignment, given that Silicon Valley AI is somewhat selecting for bad actors.
I have a lot more to say but will take some time to process everything here first.
Thanks for this, this is interesting.
I am sure there are cleaner cases, like your “Bob works for BigAI” example, where taking legal action and amplifying it in the media could produce a Streisand effect that brings cultural awareness to the more ambiguous cases. Some comments:
Silicon Valley is one “big, borderless workplace”
Silicon Valley is unique in that it’s one “big, borderless workplace” (quoting Nussbaum). As she puts it:
Even if you are not currently employed by Harvey Weinstein or seeking employment within his production company, in a very real sense you always are seeking employment and you don’t know when you will need the good favor of a person of such wealth, power, and ubiquitous influence. (Source.)
Therefore, policing along clean company lines becomes complicated really fast. Even if Bob isn’t directly recruiting for BigAI (but works for BigAI), being in Bob’s favor could improve your chances of working at SmallAI, which Bob has invested in.
The “borderless workplace” nature of Silicon Valley, where company lines are somewhat illusory, and high-trust social networks are what really matter, is Silicon Valley’s magic and function. But when it comes to policing bad behavior, it is Silicon Valley’s downfall.
An example that’s close to scenarios that I’ve seen
Alice is an engineer at SmallAI who lives at a SF hacker house with her roommates Bob, Chad, and Dave, who all work for BigAI. The hacker house is, at first, awesome because Bob, Chad, and Dave frequently bring in their brilliant industry friends. Alice gets to talk about AI everyday and build a strong industry network. However, there are some problems.
Chad is very into Alice and comes into her room often. Alice has tried to set a firm boundary, but Chad is not picking up on it, whether intentionally or not. Alice starts getting paranoid and is very careful to lock her room at night.
Alice does not want to alienate Chad, who has a lot of relationships in the industry. She feels like she’s already been quite firm. She feels like she cannot tell Bob or Dave, who are like Chad’s brothers. She’s afraid that Bob or Dave may think she’s making a big deal over nothing.
Alice’s friend Bertha, who is an engineer at MidAI, has stopped coming to the hacker house because, as she tells Alice, she finds their parties to be creepy. At the last party, the hacker house hosted a wealthy venture capitalist from AI MegaInvestments who appeared to be making moves on a nineteen-year-old female intern at LittleAI. When Bertha tried to say something, Bob and Chad seemed to get really annoyed. Who does Bertha think she is, this random MidAI employee? The venture capitalist might invest in their spin-off from BigAI one day! Bertha stops coming to the hacker house, and her network slightly weakens.
Alice also debates distancing herself from her house. She secretly agrees with Bertha, and she’s finding Chad increasingly creepy. However, she’s forged such an incredible network of AI researchers at their parties—it all must be worth it, right? Maybe one day she’ll also transition into BigAI, because she now knows a ton of people there.
One day, Alice’s other ML engineer friend, Charlotte, visits their hacker house. She’s talking a lot to Bob, and it looks like they’re having a good time. Alice does not hear from Charlotte for a while, and she doesn’t think much of it.
Six months later, Charlotte contacts Alice to say that Bob brought her to his bedroom and assaulted her. Charlotte has left Silicon Valley because of traumatic associations. Alice is shocked and does not know what to do. She doesn’t want to confront or alienate Bob. Bob had made some comments that Charlotte had been acting kind of “hysterical.” And what if Charlotte is lying? She and Charlotte aren’t that close anyway.
Nevertheless, Alice starts to feel increasingly uncomfortable at her hacker house and eventually also leaves.
As you can see, Alice’s hacker house is now a clusterf*ck. Alice, Bertha, and Charlotte have effectively been driven from the industry due to cultural problems, while Bob, Chad, and Dave’s networks continue to strengthen. This scenario happens all the time.
Proposal
I propose that the high-status companies and VC firms in Silicon Valley (e.g. OpenAI, Anthropic, Sequoia, etc) could make more explicit that they are aware of Silicon Valley’s “big, borderless workplace” nature. Sexual harassment at industry-related hacker houses, co-working spaces, and events, even when not on direct company grounds, reflects the company to some extent, and it is not acceptable.
While I don’t believe these statements will deter the most severe offenders, pressure from institutions/companies could weaken the prevalent bystander culture, which currently allows these perpetrators to continue harassing/assaulting.
This is a great list! I think this one is extremely valuable and something that men may be better equipped to do than I would:
Try to find a way to talk to and understand the men who have conflicted feelings about gender equality etc. (to anyone who might read this: please let me know if you would like to talk—I understand trust can be an issue but I think we can work through that)
I’d love to write another post about this too, targeted at men who have conflicted feelings about gender equality, sexual violence, etc. The problem with this current post is it may be preaching to the choir :) Someone (probably me) needs to shill AI Twitter with these ideas, but rebranded for the average mid-twenties male AI researcher. “Fighting bad actors in AI” has been one message I’ve been playing with.
Yeah, I see your point, SFBA as the first approximation makes sense to me!
There are exceptions, to be sure. For instance, some sorts of conduct implicate fitness to hold certain roles (e.g., a professional truck driver who drives drunk off the clock, someone with significant discretionary authority over personnel matters who engages in racist conduct).
When do these exceptions apply? They may here, if the same people who showed such poor judgement in other contexts also have decision-making power over high-leverage systems.
Yeah, this is interesting. I would invoke some of the content from Citadels of Pride here, where we can draw an analogy between Silicon Valley and Hollywood.
I would argue that hacker houses are being used as professional grounds. There is such an extent of AI-related networking, job-searching, brainstorming, ideating, startup founding, angel investing, and hackathoning that one could make an argument that hacker houses are an extension of an office. Sometimes, hacker houses literally are the offices of early stage startups. This also relates to Silicon Valley startup culture’s lack of distinction between work and life.
This puts a vulnerable person trying to break into AI in a precarious position. Being in these environments becomes somewhat necessary to break in; however, one has none of the legal protections of “official” networking environments, including HR departments for sexual harassment. The upside for an aspirant could be a research position at her dream AI company through a connection she makes here. The downside could be getting drugged and raped if her new “acquaintance” decides to put LSD in her water.
Hacker houses would then give the AI company’s employees informal networking grounds to conduct AI-related practices while the companies derisk themselves from liability, which makes this a very different situation from criminal activity at the local grocery store.
Thank you for this comment. You’ve made some things explicit that I’ve been thinking about for a long time. It feels analogous to saying the emperor has no clothes.
I am growing increasingly concerned that the people supposedly working to protect us from unaligned AI have such weak ethics. I am wondering if a case can be made that it is better to have a small group of high-integrity people work on AI safety than a group twice as large that is half composed of low-integrity individuals. I wouldn’t want a bank robber to safeguard democracy, for example.
The idea of having fewer AI alignment researchers, but those researchers having more intensive ethical training, is compelling.
Actually, some of my best mentors around sexuality have been my female friends. I really recommend men foster deep, meaningful friendships with heterosexual women. When they tell you about their dating experiences, you will very quickly understand how to behave around women you are interested in sexually.
There is currently a huge vacuum in mentorship for men about how to interact with women (hence the previously burgeoning market of red pill, dating coaches, Jordan Peterson, etc). More thought leadership by men who have healthy relationships with women would be a service to civilization. Maybe you should write some blog posts :).
As for the rest of your comment, I responded below to Rebecca.
I do think “EA is plagued with sexism, racism, and abuse” is a very, very coarse first approximation of what’s actually going on.
A better, second approximation may look like this description of “the confluence”:
“The broad community I speak of here are insular, interconnected subgroups that are involved most of these categories: tech, EA (Effective Altruists), rationalists, Burning Man camps, secret parties, and coliving houses. I’ve heard it referred to as a “clusterf**k” and “confluence”, I usually call it a community. The community is centered in the San Francisco bay area but branch out into Berlin, London/Oxford, Seattle, and New York. The group is purpose-driven, with strong allegiance to non-mainstream morals and ideas about shaping the future of society and humanity” (Source)
There is probably an even better third approximation out there.
I do think that these toxic dynamics largely got tied to EA because EA is the most coherent subculture that overlaps with “the confluence.” Plus, EA was in the news cycle, which incentivized journalists to write articles about it, especially when “SBF” and “FTX” get picked up by search engines and recommender systems. EA is a convenient tag word for a different (but overlapping) community that is far more sinister.
As I wrote in my response above, I’m mainly sad that my experience of EA was through this distorted lens. It also seems clear to me that there are large swathes (perhaps the majority?) of EA that are healthy and well-meaning, and I am happy this has been your experience!
One of my motives for writing this post was giving people a better “second approximation” than EA itself being the problem. I do believe people put too much blame on EA, and one could perhaps make the argument that more responsibility could be put on surrounding AI companies, such as OpenAI/Anthropic, some of whose employees may be involved in these dynamics through the hacker house scene.
Thank you, I’m glad that you appreciate my post. I do want to write more, and would be happy to know if there are any topics in particular that are of interest.
Thank you for your kind words, and for the work you are doing! I haven’t been in Boston since ~2017, but I messaged you with what I know.
Yeah, I am mainly really sad that my experience in EA/EA-adjacent communities was through the distorted lens of these redpilled AI and AI safety researchers. But I hope to engage with the more productive part (and seemingly majority!) of the EA community going forward!
Anecdotally, I have heard ~3 stories of premeditated psychedelic assault (the guy sometimes even explicitly says his intent to others, sometimes realizing this is bad, sometimes not realizing this is bad). I have heard ~2 stories of psychedelic date rape assault that were probably not premeditated, and ~2 stories that were ambiguous. I don’t know how well this reflects base rates. The ~3 stories of premeditated assault may be passed on through word of mouth more frequently, because they’re more obviously frightening.
But more importantly, I should have been clearer in my original post. Psychedelic assault is bad, whether or not it’s premeditated. I wrote more in this response about premeditated vs. non-premeditated assault.
Update:
I’m going to take a stab at a framework. This is the first time I’ve written this down, so consider this possibly prone to errors and in draft status.
Instead of lumping all types of sexual harassment/abuse together, we could view sexual abuse/harassment as structurally similar to first, second, and third degree homicide, with varying degrees of intent.
Type 1 sexual abuse/harassment may involve calculation and intent. The actions of Epstein, Weinstein, and Ratrick described above, such as studying red pill scripts, would fall under Type 1. You can see that this behavior was premeditated.
Type 2 sexual abuse/harassment may be analogous to “crimes of passion,” such as a hypothetical guy becoming overtaken with desire and not checking in on the woman. But it is not a premeditated offense.
To be clear, both types are bad. I don’t think parts of Silicon Valley take either type seriously enough. Type 1 and Type 2 also form a spectrum, and repeat offenders may have a mix of both. However, perhaps the offenses should be treated differently.
I think people get squeamish when you try to lump an adolescent boy who is learning boundaries and is clumsy with consent but open to correction (Type 2) together with someone like Epstein, who is a calculated offender (Type 1). This may also be one reason why the #MeToo movement got cancelled; there may have been a lumping of Type 2 with Type 1 offenders in a way that made some people think parts of the movement were “unreasonable.”
I think restorative justice fails when it is assumed that the offender is a Type 2 offender who is open to correction. But oftentimes, the offender is actually a Type 1 offender who is manipulating his community into thinking he is a Type 2 offender. Or, even if the crimes are not premeditated, they are serial, and the offender is not open to correction but pretends to be, which would make him Type 1.
So perhaps severity of consequences of sexual harassment/abuse should be modulated by: a) “How premeditated was the act?” and b) “Is this a repeated offense?,” which together could classify the offense into a Type 1 or Type 2 offense.
Thank you a lot for this comment. I am honestly surprised and saddened by the number of downvotes on Mandelbrot’s post and think it’s ironically reflective of the issue she is drawing attention to.