I have emailed her and will update this comment when she gets back! I think there was an ~8-page questionnaire that evolved over time (since there were probably about 12 nannies/au pairs, and lessons were learned along the way) and a Skype interview, though.
Thanks so much for these recommendations! They’re really helpful, and I’m likely to donate to one of the recommended organizations this giving season.
I do have a question: which of the recommended organizations have close ties to EA? I realize that “close ties” is a vibe-y concept, but things like “incubated by CE,” “director has been involved in EA since 2015,” or “received most of their funding from EA funders prior to being recommended by ACE” would count. (I’d be eager to hear others’ input on how I’m cashing out “close ties.”)
The reason I ask is not that being closely tied to EA is a bad thing; clearly, if someone is an EA and starts an impactful charity based on ITN reasoning, etc., this is not an argument against funding them. That said, I do think EA is rife with conflicts of interest, and that (1) this presumably does have an effect on who receives grants/support/endorsements, so I’d likely subject these organizations to closer scrutiny before donating, and (2) in general, I think we should strive to be as transparent as possible about this stuff.
Interesting! I’d like to see an analysis of things correlated with most/all of the children in one family turning out well, because I’d be more inclined to emulate the parenting style of parents where (1) all of their kids became happy, reasonably successful, well-adjusted adults than that of parents where (2) one kid became a superstar.
Combination of full-time nannies (who probably worked 40 hrs/week and didn’t live with us) before we were school-aged and live-in au pairs when we were school-aged (who probably worked 6:30-9am and 4-8pm on weekdays, and maybe one full day/weekend).
Thanks for writing this—super helpful. Just one anecdote on the childcare front: my siblings and I had full-time nannies/au pairs from when we were babies until we could drive, because our parents worked full-time and often traveled. (My mom had an intensive screening process for said nannies/au pairs, and chose excellent ones.) I view this as having been a really good thing for my development—I became less shy, valued my time with my parents more, learned about other parts of the world/cultures/ways of life, was mentored by women in their 20s as a pre-teen/teenager, and developed close relationships with some amazing people. I think parents sometimes view hiring external childcare as a necessary evil, but for me (and, I think, my siblings) it was a really positive aspect of our childhoods.
I do think the portrayal of EAs could be worse, but it’s pretty bad? EAs are accused of being hypocritical (e.g., way more concerned with money than they would care to admit), culty, overly trusting, overconfident, and generally uncool.
Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a “very serious issue,” and lacking “any perspective at all.”
This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketch: these issues are hard to understand, and so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc) will likely lead to fewer bad takes, but, more importantly, risks leading to an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore, but echo chambers are hard to escape from.
I like how you’re characterizing this!
I get that the diagram is just an illustration and isn’t meant to be to scale, but the EA portion of the GHD bubble should probably be much, much smaller than is portrayed here (maybe 1% of the bubble, because total GHD funding is so much bigger than the diagram suggests). This is a really crude estimate, but EA spent $400 million on GHD in 2021, whereas IHME says that nearly $70 billion was spent on “development assistance for health” in 2021, so EA funding constitutes a tiny portion of all GHD funding.
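To put a number on it, a crude back-of-the-envelope calculation from those two figures (treating IHME’s “development assistance for health” total as a proxy for all GHD funding):

$$\frac{\$0.4\ \text{billion (EA GHD spending, 2021)}}{\$70\ \text{billion (total DAH, 2021)}} \approx 0.6\%$$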
I think this matters because GHD EAs have lots and lots of other organizations/spaces/opportunities outside of EA that they can gravitate to if EA starts to feel like it’s becoming dominated by AI safety. I worry about this because I’ve talked to GHD EAs at EAGs, and sometimes the vibe is a bit “we’re not sure this place is really for us anymore” (especially among non-biosecurity people). So I think it’s worth considering: if the EA community further grows the AI safety field, is this liable to push non-AI safety people—especially GHD people, who have a lot of other places to go—out of EA? And if so, how big of a problem is that?
I assume it would be possible to analyze some data on this, for instance: are GHD EAs attending fewer EAGs? Do EAs who express interest in GHD have worse experiences at EAGs, or are they less likely to return? Has this changed over time? But I’d also be interested in hearing from others, especially GHD people, on whether the fact that there are lots of non-EA opportunities around makes them more likely to move away from EA if EA becomes increasingly focused on AI safety.
What are examples of behaviors you engage in that you suspect are inconsistent with the values/behaviors most EAs would endorse, but that you endorse doing (i.e., because you disagree to some extent with standard EA values, or because you think that EAs draw the wrong behavioral conclusions on the basis of EA values)?
Examples would (probably) not be: “I donate to political campaigns because I think this may actually be high EV” [not inconsistent with EA values] or “I eat meat but feel bad about it” [not endorsed]
Examples might be: “I donate to a local homeless shelter because it’s especially important to me to support members of my community” [deviates from standard EA values] or “I eat chickens that were raised on a local farm because I think they have good lives” [different behavioral conclusions]
I look forward to reading your point-by-point response. I suspect you will convince me that some of the events described in this post were characterized inaccurately, in ways that are unflattering to Nonlinear. However, I think it is very unlikely you will convince me that Nonlinear didn’t screw up in several big, important ways, causing significant harm in the process (for reasons along these lines).
I would thus strongly encourage you to also think about what mistakes Nonlinear made, and what things it is worth apologizing for. I think this would be very good for the community, since it would help us gain better insight into what went wrong here, and how to avoid similar situations going forward.
Fair enough! I think this discussion is being harmed by ambiguity about the behaviors we’re talking about (this is my fault; my posts have been unclear). I don’t think I’d classify “helping new hires find housing” as violating “standard/reasonable professional norms.”
I’m mainly thinking about the kinds of behaviors EAs engage in that are described in the above post (and my general heuristics about the kinds of practices that are normalized in the EA community). I do think that if you’re asking yourself something like “Should I live with my employee who has less power than me?” or “Should I use drugs with these colleagues?” it is better to err on the side of not doing this kind of stuff, at least at first. If, after a year of working with people, you all decide to start smoking weed together, that strikes me as probably pretty innocuous (versus if you had established this kind of culture at the outset).
Thanks for your perspective on this!
“Small-seeming requirements like ‘new hires have to find their own housing’ can easily make the difference between being able to move quickly vs. slowly on some project that makes or breaks the company.”
Do you have an example of this? It is surprising to me that maintaining reasonable/standard professional norms could actually sink a company. (Among other things, at a small company you have limited manpower, so personnel time devoted to helping someone find housing is presumably coming out of time spent somewhere else—i.e., working on the time-sensitive project.)
“1) As my company has grown, we have had many forces naturally pushing in the direction of ‘more professional’: new hires tend to be much more worried about blame for doing things too quick-and-dirty than about incurring costs on the business in order to do things the buttoned-up way; I’ve stepped in more often to accept a risk rather than to prevent one, although I certainly do both!”
I suspect we’re just defining “professional” differently here (or thinking about really different professional contexts), but my experience is pretty strongly informed by having worked in an office pre-COVID, and seeing how profoundly professional culture has eroded, and how hard it has been to build any of that back. I think grad students/academics who have taught undergrads post-COVID have also been struck by this: it seems like norms within education quickly (and understandably!) became quite lax during COVID, but it’s been quite difficult to reverse those changes (e.g., getting students to turn stuff in on time, respond to emails, show up to mandatory events, etc.). That said, I know that older people have always tended to think that the youth are a bunch of degenerates, so plausibly that’s coloring our perception here, too.
I think I largely agree with this list, and want to make a related point, which is that I think it’s better to start with an organizational culture that errs on the side of being too professional, since 1) I think it’s easier to relax a professional culture over time than it is to go in the other direction and 2) the risks of being too professional generally seem smaller than the risks of being insufficiently professional.
Yeah, again, I think you might well be right on the substance. I haven’t tweeted about this and don’t plan to (in part because I think virality can often lead to repercussions for the affected parties that are disproportionate to the behavior—or at least, this is something a tweeter has no control over). I just think EA has kind of a yucky history when it comes to being prescriptive about where/when/how EAs talk about issues facing the EA community. I think this is a bad tendency—for instance, I think it has, ironically, contributed to the perception that EA is “culty” and also led to certain problematic behaviors getting pushed under the rug—and so I think we should strongly err on the side of not being prescriptive about how EAs talk about issues facing the community. Again, I think it’s totally fine to explain why you yourself are choosing to talk or not talk about something publicly.
I think the substance of your take may be right, but there is something that doesn’t sit well with me about an EA suggesting to other EAs (essentially) “I don’t think EAs should talk about this publicly to non-EAs.” (I take it that is the main difference between discussing this on the Forum vs. Twitter—like, “let’s try to have EA address this internally, at least for now.”) Maybe it’s because I don’t fully understand your justification—“there is room for people to walk back and apologize”—but the vibe here feels a bit to me like “as EAs, we need to control the narrative around this” (“there is an appropriate level of publicity”), and that always feels a bit antithetical to people reasoning about these issues and reaching their own conclusions.
I think I would’ve reacted differently if you had said: “I don’t plan to talk about this publicly for a while because of x, y, and z” without being prescriptive about how others should communicate about this stuff.
Yeah, my thought is pretty high-level, basically: a lot of professional norms exist for good reasons, and if we violate them—and especially if we violate a lot of them at the same time, as happened here—then this produces the kinds of circumstances in which these disputes tend to arise.
Certainly, there’s some cost-benefit analysis here, with respect to specific norms and specific contexts, that could be (and I’m sure will continue to be) litigated. But everyone involved has been really harmed by this—in terms of wasted time, sunk emotional energy, and damaged reputations—and that just seems really unfortunate, given that it is not that hard to substantially reduce the risk of these kinds of things happening by adhering to standard professional norms.
This situation reminded me of this post: “EA’s weirdness makes it unusually susceptible to bad behavior.” Regardless of whether you believe Chloe and Alice’s allegations (which I do), it’s hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don’t live together, travel the world together, and become romantically entangled). A lot of the things that happened here (which no one is disputing) are professionally weird; for example, these anecdotes from Ben’s summary of Nonlinear’s response (and the linked job ad):
“Our intention wasn’t just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with.”
“We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people.”
“The formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn’t”
“The semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn’t push hard on this.”
I am reminded again that, while many professional norms are stupid, a lot of them exist for good reasons. Further, I think it’s often pretty easy to disentangle the stupid professional norms from the reasonable professional norms by just thinking: “Are there good reasons this norm exists?” (E.g., “Is there a reason employees and employers shouldn’t live together?” Yes: the power dynamics inherent to the employer/employee relationship are at odds with healthy roommate dynamics, in which people generally shouldn’t have lots of power over one another. “Is there a reason I should have to wear high heels to work in an office?” … No.) Trying to make employees part of your family unit, not negotiating financial details with your employees, covering your employees’ rent and groceries, and being in any way involved in your employees breaking the law are all behaviors that are at odds with standard professional practices, and there are very obviously good reasons for this.
Thanks for your thoughtful response! This is helpful and makes sense.
A few reactions:
1. It does seem like you/Max/the search committee have a lot of relevant experience, and I appreciate that Max erred on the side of understating this.
2. I am not shocked that EA organizations often haven’t found consulting outside firms to be super helpful, but I am surprised by how bad these experiences seem to have been.
“My personal experience has been that their normal customers have such different desires/incentives that their guidance to us feels wildly off base. HR people, for example, assume we are in an adversarial relationship with our employees and become confused when we want to, e.g., share more of certain types of information with them. Similarly, we often don’t seem to be able to get on the same page about valuing integrity or transparency for their own sake, not just the appearance of such things.”
Three reactions: First, even if a lot of companies function in the way you describe, I assume there are a lot of non-profits that do not have adversarial relationships with their employees/donors? I also find it a bit implausible that EA organizations uniquely value integrity and transparency for their own sake (even if EA organizations do tend to place more of a premium on these things than other non-profits, which, again, I think one could reasonably contest). Second, it seems like a good outside consultant could modify its approach in light of CEA saying “hey, we value these things.” I would assume that a critical part of the consultant’s role is understanding an organization’s specific values and goals—since doing so is paramount to, e.g., their being able to find an effective executive—and if they can’t do this effectively, then presumably they’re pretty bad at their jobs. Maybe I have too much faith in the free market, but I assume that if these external firms were charging a ton of money to be useless, organizations would stop relying on their services. And third, it seems possible for an external firm to (a) not totally understand the importance of things like integrity and transparency to EA organizations but (b) still be able to say what traits are important in an executive (in general) and help vet potential candidates (which it sounds like you maybe agree with).
3. A broader (more speculative) takeaway: my sense from your response is that EA organizations have, by and large, not figured out how to work effectively with outside consultants/firms. One might conclude that such efforts are doomed to fail because (some more nuanced version of) “they don’t get us.” But an alternate conclusion one might draw is that it’s hard to figure out how to work well with outside actors, so there’s a learning curve to be climbed here, and EA organizations need to climb it, because (a) only a tiny fraction of the talented/knowledgeable/experienced people in the world are involved in EA and (b) the EA community has important blind spots. So even if we expect that consulting an outside firm on any given project isn’t going to be super helpful to achieving the goals of that project (e.g., hiring a CEO), I’m still left thinking that EA organizations should err on the side of doing this, because it’s important to figure out how to do this well, and that requires practice.
Thanks for writing this up and for doing the fellowship! Would you mind saying a bit more about how participants’ career plans changed as a result of doing the fellowship (if you know) and/or how you plan to monitor their plans going forward?