I used to agree with the thrust of this post more than I do now; these days I think it’s somewhat overstated.
[Below written super fast, and while a bit sleep deprived]
An overly crude summary of my current picture: if you do community-building via spoken interactions, it’s somewhere between “helpful” and “necessary” to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they find impressive, worth listening to, and admirable. Additionally, being interested in direct work is correlated with a bunch of positive qualities that help with community-building (like being intellectually curious and having interesting and informed things to say on many topics). But not a ton of it is actually needed for many kinds of extremely valuable community-building, in my experience (which seems to differ from e.g. Oliver’s). And I think people who emphasize the value of keeping up with direct work sometimes conflate the value of, e.g., knowing about new directions in AI safety research with the broader value of becoming a more informed person and gaining the intellectual benefits that come from practice engaging with object-level problems rather than social ones.
Earlier on in my role at Open Phil, I found it very useful to spend a lot of time thinking through cause prioritization, getting a basic lay of the land on specific causes, thinking through which problems and potential interventions seemed most important, and becoming emotionally bought-in on spending my time and effort on them. Additionally, I think the process of thinking through whom you trust, and why, and doing early audits that can form the foundation for that trust, is challenging but very helpful for doing EA community-building work well. I’m wholly in favor of that, and would guess that most people who don’t make this kind of upfront investment are making an important mistake.
But on the current margin, the time I spend keeping up with e.g. new directions in AI safety research feels substantially less important than time spent on implementation of my core projects, and it’s almost never directly decision-relevant (though there are some exceptions: e.g., I can imagine information that would update me a lot about AI timelines, as some historically has, and that would flow through to concretely different decisions). And examining what’s going on there, it seems like most decisions I make as a community-building grantmaker are too crude to be affected much by additional information at that level of intra-cause granularity, and the same seems true when I think about lots of other community-building-related decisions.
For example, if I ask a bunch of AI safety researchers what kinds of people they would like to join their teams, they often say pretty similar versions of “very smart, hardworking people who grok our goals and who are extremely gifted in a field like math or CS”. And I’m like “wow, that’s very intuitive, and has been true for years, without changing”. Subtle differences between alignment agendas don’t, in my experience, show up enough in people’s ideas about what kinds of recruitment are good for it to have been a good use of my time to dig into them. This is especially true given that the places where informed, intelligent people with various important-to-me markers of trustworthiness disagree are the places where I find it particularly difficult for an outsider to gain much justified confidence.
Another testbed: I spent a few years putting a lot of time into Open Phil’s biosecurity strategy, and I formed a lot of my own pretty nuanced and intricate views about it. I’ve never gone as deep on AI. But I notice that I didn’t find my own set of views about biosecurity that helpful for many broader community-building tradeoffs and questions, compared to the counterfactual of trusting the people who seemed best to trust in the space (whom I think I could have identified using a bunch of proxies that didn’t involve forming my own models of biosecurity) and catching up with or interviewing them every six months about what it seems helpful to know (which is more similar to what I do with AI). Idk, that feels more like 5-10% of my time, though maybe I absorb additional context via osmosis from social proximity to people doing direct work, and maybe that’s helpful in ways that aren’t apparent to me.
This seems really exciting, and I agree that it’s an underexplored area. I hope you share the resources you develop and the things you learn, to make it easier for others to start groups like this.
PSA for people reading this thread in the future: Open Phil is also very open to and excited about supporting AI safety student groups (as well as other groups that seem helpful for longtermist priority projects); see here for a link to the application form.