I used to agree more with the thrust of this post than I do, and now I think this is somewhat overstated.
[Below written super fast, and while a bit sleep deprived]
An overly crude summary of my current picture is: if you do community-building via spoken interactions, it’s somewhere between “helpful” and “necessary” to have a substantially deeper understanding of the relevant direct work than the people you are trying to build community with, and also to be the kind of person they think is impressive, worth listening to, and admirable. Additionally, being interested in direct work is correlated with a bunch of positive qualities that help with community-building (like being intellectually curious and having interesting and informed things to say on many topics). But not a ton of it is actually needed for many kinds of extremely valuable community building, in my experience (which seems to differ from e.g. Oliver’s). And I think people who emphasize the value of keeping up with direct work sometimes conflate the value of e.g. knowing about new directions in AI safety research with the broader value of becoming a more informed person and gaining various intellectual benefits from practice engaging with object-level rather than social problems.
Earlier on in my role at Open Phil, I found it very useful to spend a lot of time thinking through cause prioritization, getting a basic lay of the land on specific causes, thinking through what problems and potential interventions seemed most important, and becoming emotionally bought-in on spending my time and effort on them. Additionally, I think the process of thinking through who you trust, and why, and doing early audits that can form the foundation for trust, is challenging but very helpful for doing EA community-building work well. I’m wholly in favor of that, and would guess that most people who don’t make this kind of upfront investment are making an important mistake.
But on the current margin, the time I spend keeping up with e.g. new directions in AI safety research feels substantially less important than time spent implementing my core projects, and almost never directly decision-relevant (though there are some exceptions: e.g. I could imagine information that would update me a lot about AI timelines, as some historically has, and this would flow through to making different decisions in concrete ways). Examining what’s going on there, it seems like most decisions I make as a community-building grantmaker are too crude to be affected much by additional intra-cause information at the relevant level of granularity, and the same seems true when I think about lots of other community-building-related decisions.
For example, if I ask a bunch of AI safety researchers what kinds of people they would like to join their teams, they often say pretty similar versions of “very smart, hardworking people who grok our goals, who are extremely gifted in a field like math or CS”. And I’m like “wow, that’s very intuitive, and has been true for years, without changing”. In my experience, subtle differences between alignment agendas do not bear out enough in people’s ideas about what kinds of recruitment are good for digging into them to be a good use of time. This is especially true given that the places where informed, intelligent people with various important-to-me markers of trustworthiness differ are the places where I find it particularly difficult for an outsider to gain much justified confidence.
Another testbed: I spent a few years spending a lot of time on Open Phil’s biosecurity strategy, and I formed a lot of my own pretty nuanced and intricate views about it. I’ve never dived as deep on AI. But I notice that I didn’t find my own set of views about biosecurity that helpful for many broader community-building tradeoffs and questions, compared to the counterfactual of trusting the people in the space who seemed best to trust (which I think I could have guessed using a bunch of proxies that didn’t involve forming my own models of biosecurity) and catching up with them or interviewing them every six months about what it seems helpful to know (which is more similar to what I do with AI). Idk, this feels more like 5-10% of my time, though maybe I absorb additional context via osmosis from social proximity to people doing direct work, and maybe this is helpful in ways that aren’t apparent to me.
A lot of what Claire says rings true to me.
Just to focus on my experience:
Spending 3-5% of my time talking to key object-level people feels very useful. I think I did too little of this after covid started and I stopped going to in-person conferences (and didn’t set up a compensating set of meetings), and that was a mistake.
Considering my options going forward, I now have the opportunity to spend serious time learning about object-level issues, but it usually seems like the best way to do that is just to speak to lots of people in the area and read about it, rather than to ‘do’ object-level work itself. (My understanding is that you see this as within the scope of your 20%, but I think the vibe of “go and research an object-level issue so you’re better informed for your community-building work” is pretty different from “spend more time doing object-level work”.)
At the same time, there are lots of community building projects I could do that seem valuable and don’t require extra knowledge, so the opportunity cost seems high. (Also agree with Claire there are lots of straightforward priorities that just need to be executed on.)
Having learning be driven by specific needs often seems more efficient than open-ended learning. E.g. if I want to write more about biosecurity (in order to movement-build), then I’ll learn more about biosecurity as I’m doing that, rather than going to work at Alvea for a few years.
It could be that I’m underweighting the long-term benefits of being generally better informed or non-directed exploration & learning.
From my experience running a team, I think encouraging staff to spend 20% of their time on object-level work would be a big cost. Focus and momentum are really important for productivity. In a larger team, it’s a challenge even to keep 30-60% of your time free for pushing forward your top priority (lower end for senior staff), so putting 20% of your time into a side project means losing 33-66% of your “actually pushing your top priority forward” time, which would really slow down the org. The benefits would need to be pretty huge.
I’d be more excited about:
Staff having “10% time” for interest-driven projects or personal development, and maybe these focusing more on object-level work.
Early in their careers, people aiming to work for a few years in an object-level area as part of exploration, so they’ve experienced that before doing community building for a while.
Doing more switching back and forth also seems reasonable, though less obviously so.
In my experience hiring, it’s great when someone (e.g. an advisor or researcher) has experience in one of the object-level areas, but it’s also great when someone has experience getting people interested in EA, or in community-building skills like marketing, giving talks, writing, operations, etc. It’s not obvious to me that it would be better to prioritise hiring people with object-level experience a lot more than these other types of experience.
Salient points of agreement:
I agree it’s pretty clear that you’re not currently in a position where you should consider learning by going into direct work for years.
I agree that the things you say you’d be more excited about are lower-hanging fruit than asking professional community builders to spend 20% of their time on object-level stuff.
OTOH, my gut impression (which may well be wrong) is that if 80k doubled its knowledge of object-level priorities (without taking up any time to do so), that would probably increase its impact by something like 30%. So from this perspective, spending just 3-5% of time on keeping up with stuff feels like maybe a bit of an under-investment (although maybe that’s correct if you’re doing it for a few years and then spending time in a position that gives you space to go deeper).
One nuance: activity that looks like “let people know this community and its basic principles exist, so that the people who would be a natural fit get to hear about it” feels to me like it belongs in the education bucket rather than the community-building bucket. If you’re aiming for intermediate variables like broad understanding, rather than a particular shape of community, then it’s less important to have nuanced takes on what the community should look like. So for that type of work I’m less inclined to defend anything like 20% (although I’m still often into people who are going to do that spending a bunch of time earlier in their careers going deep on some of the object-level work).
That’s useful.
1) Just to clarify, I don’t think 80k staff should only spend 3-5% of their time keeping up on object-level things in total. That’s just the allocation to meetings and conferences.
In practice, many staff have to learn about object-level stuff as part of their job (e.g. if writing a problem profile, interviewing a podcast guest, or figuring out what to do in a one-on-one) – I’m pro learning that’s integrated into your mainline job.
I also think people could spend some of their “10% time” learning about object-level stuff, and that would be good.
So a bunch probably end up at 20%+, though usually the majority of the knowledge is accumulated indirectly.
2) A +30% gain was actually less than I might have expected you’d say. Spending, say, 10% of time to get a 30% gain only sounds like a so-so use of attention to me. My personal take would be that 80k managers should focus on things with either bigger bottom-line gains (e.g. how to triple their programme as quickly as possible) or higher ROIs than that.
I thought the worry might be that we’d miss out on tail people, and so end up with, say, 90% less impact in the long term, or something like that.
3) Hmm, it seems like most of what 80k and I have done is actually education rather than community-building on this breakdown.
Re. 2), I think the relevant figure will vary by activity. 30% is a not-super-well-considered figure chosen for 80k, and I think I was skewing conservative … really I’m something like “more than +20% per doubling, less than +100%”. Losing 90% of the impact would be more imaginable if we couldn’t just point outliery people to different intros, and would be a stretch even then.
Thanks, really appreciated this (strong upvoted for the granularity of data).
To be very explicit: I mostly trust your judgement about these tradeoffs for yourself. I do think you probably get a good amount from social osmosis (such that if I knew you didn’t talk socially a bunch to people doing direct work I’d be more worried that the 5-10% figure was too low); I almost want to include some conversion factor from social time to deliberate time.
If you were going to get worthwhile benefits from more investment in understanding object-level things, I think the ways this would seem most plausible to me are:
Understanding not just “who is needed to join AI safety teams?”, but “what’s needed in people who can start (great) new AI safety teams?”
Understanding the network of different kinds of direct work we want to see, and how the value propositions relate to each other, to be able to prioritize finding people to go after currently-under-invested-in areas
Something about long-term model-building which doesn’t pay off in the short term but that you’d find helpful in five years’ time.
Overall I’m not sure if I should alter my “20%” claim to add more nuance about seniority (more senior means investment is more important) and career stage (earlier means more investment is good). I think something like that is probably more correct, but “20%” still feels like a good gesture at a default.
(I also think that you just have access to particularly good direct-work people, which means you probably get some of the benefits of syncing about what they need in more time-efficient ways than may be available to many people, so I’m a little suspicious of trying to hold up the Claire Zabel model as one that will generalize broadly.)