Formerly Executive Director at BERI; now Secretary and board member. Current board member at SecureBio and FAR.AI, where I'm also the Treasurer.
sawyer
Love this, great work. I especially appreciate your honest opinions on what mistakes you think you made and how the survey could have been improved. If JERIS continues next year, those thoughts will enable a lot of improvement!
Consider adding the Berkeley Existential Risk Initiative (BERI) to the list, either under Professional Services or under Financial and other material support. Suggested description: "Supports university research groups working to reduce x-risk, by providing them with free services and support."
Great post. This put words to some vague concerns I've had lately with people valorizing "agent-y" characteristics. I'm agentic in some ways and very unagentic in other ways, and I'm mostly happy with my impact, reputation, and "social footprint". I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).
This post, especially the section on "Assuming that it is low-cost for others to say 'no' to requests", reminded me of Deborah Tannen's book That's Not What I Meant!: How Conversational Style Makes or Breaks Relationships. I found it really enlightening, and I'd recommend it for help understanding the unexpected ways other people approach social interactions.
Good catch, thanks! I can't find my original quote, so I think this was a recent change. I will edit my post accordingly.
Great points, thanks David. I especially like the comparison between personal connections and academic credentials. You're probably more experienced with academia and non-EA philanthropy than I am, so your empirical views are different. But I also think that even if EA is better than these other communities, we should still be thinking about (1) keeping it that way, and (2) maybe becoming even less reliant on personal connections. This is part of what I was saying with:
None of this is unique to EA. While I think EA is particularly guilty of some of these issues, in general I could aim this criticism in any direction and hit someone guilty of it. But "everyone else does it" is not in and of itself a reason to accept it. We claim to be doing something really difficult and important, so we should try to be as good as possible.
I think your observations may be counterevidence to anyone saying that EA should become more reliant on personal connections: you think (possibly correctly) that other major philanthropy is more reliant on personal connections than EA is, and I assume we agree that EA philanthropy is better than most other major philanthropy.
I think the extent to which "member of the EA community" comes along with a certain way of thinking (i.e. "a lot of useful frames") is exaggerated by many people I've heard talk about this sort of thing. I think ~50% of the perceived similarity is better described as similar ways of speaking and knowledge of jargon. I think there are actually not that many people who have fully internalized new ways of thinking that are (1) very rare outside of EA, and (2) shared across most EA hiring managers.
Another way to put this would be: I think EA hiring managers often weight "membership in the EA community" significantly more highly than they should. I think our disagreement is mostly about how much this factor should be weighted.
Fair point on the fast-changing thing. I have some thoughts, but they're not very clear, and I think what you said is reasonable. One very rough take: yes, you'd still know the people you know, but you might go from "I know 50% of the people in AI alignment" to "I know 10% of the people in AI alignment" in 3 months, which could be disorienting and demoralizing. So it's more of a relative thing than the absolute number of people you know.
Explicitly asking for a reference the head organizer knows personally.
That feels pretty bad to me! I can imagine some reason that this would be necessary for some programs, but in general requiring this doesn't seem healthy.
I find the request for references on the EA Funds' application to be a good middle ground. There are several sentences to it, but the most relevant one is:
References by people who are directly involved in effective altruism and adjacent communities are particularly useful, especially if we are likely to be familiar with their work and thinking.
It's clearly useful to already be in the fund managers' network, but it's also clearly not required. Of course there's always a difference between the policy and the practice, but this is a pretty good public policy from my perspective.
Thanks Chi, this was definitely a mistake on my part and I will edit the post. I do think that your website's "Get Involved" → "CLR Fund" might not be the clearest path for people looking for funding, but I also think I should have spent more time looking.
Thanks for the thoughtful feedback Chris!
I think that the author undervalues value alignment and how the natural state is towards one of regression to the norm unless specific action is taken to avoid this
I think there is a difference between "value alignment" and "personal connection". I agree that the former is important, and I think the latter is often used (mostly successfully) as a tool to encourage the former. I addressed one aspect of this in the Hiring Managers section.
I agree that as EA scales, we will be less able to rely on personal relationships, but I see no reason to impose those costs now
Fair, but I worry that if we're not prepared for this then the costs will be greater, more sudden, and confusing, e.g. people starting to feel that EA is no longer fun or good and not knowing why. I think it's good to be thinking about these things and make the tactical choice to do nothing, rather than leaving "overreliance on personal connections can be bad" out of our strategic arsenal completely.
I agree that it may affect our reputation in the outside world, but I don't think it's worth increasing the risk of bad hires to attempt to satisfy our critics.
I don't think my suggestions for hiring managers would increase the risk of bad hires. In fact, I think moving away from "my friend is friends with this person" and towards "this person demonstrates that they care deeply about this mission" would decrease the risk of bad hires. (Sorry if this doesn't make sense, but I don't want to go on for too long in a comment.)
tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.
I completely agree! I think probably some combination is best, and/or it could differ between subcommunities.
Also thanks for pointing out the FTX Future Fund's experience, I'd forgotten about that. I completely agree that this is evidence against my hypothesis, specifically in the case of grantee-grantor relationships.
Great point about mitigating as opposed to solving. It's possible that my having a "solutions" section wasn't the best framing. I definitely don't think personal connections should be vilified or gotten rid of entirely (if that were even possible), and going too far in this direction would be really bad.
Thanks Stefan! I agree with those strengths of personal connections, and I think there are many others. I mainly tried to argue that there are negative consequences as well, and that the negatives might outweigh the positives at some level of use. Did any of the problems I mentioned in the post strike you as wrong? (Either you think they don't tend to arise from reliance on personal connections, or you think they're not important problems even if they do arise?)
EA is too reliant on personal connections
I think this is a good idea as a neutral tracking resource, but I might be against it if it had the effect of heaping additional praise on the billionaires. (I don't like Elliot's Impact List idea.) I think transparency is good.
Will you be taking open applications from organizations looking for funding?
Hi Lucas! If you're still looking, you might consider applying for the Deputy Director position at the Berkeley Existential Risk Initiative. Let me know if you have any questions.
I'm excited to see this happening and I think you're one of the better people to be launching it. I think there's probably some helpful overlap with BERI's world here, so please reach out if you'd like to talk about anything.
The Berkeley Existential Risk Initiative (BERI) is seeking a Deputy Director to help me grow BERI's university collaborations program and create new programs, all with the mission of improving human civilization's long-term prospects for survival and flourishing.
This is BERI's first "core" hire since I was hired 3 years ago; all of our hires since then are embedded at some particular research group, and aren't responsible for running BERI as an organization.
This is a great opportunity for an early- to mid-career person with some experience and interest in operations. I expect that the Deputy Director will contribute substantially to BERI's strategy and direction moving forward.
I'd prefer the person to be based in New York City (where I am), but remote is also an option. The position is full-time, and I expect the salary range to be $70-100k/year.
Here's a bunch of links to persuade you to work at BERI:
My thoughts on what BERI is and why I think it's important
Results from BERI's collaborator survey in January (tl;dr: people like BERI)
Annual reports for 2020 and 2021, which show how much BERI has done and include thoughts on future directions.
As in, you can't publish a post without at least one tag?