(Weakly-held view)
It seems like my job for the past 4 years (building software for the EA community, for most of that, the EA Forum) has been pretty much a "Community Building" track job.
You might argue that I should view myself as primarily working on a specific cause area, and that I should have spent 20% of my time working on it (for me it would be AI alignment), but that would be pretty non-obvious. And in any case, I would still look radically different from someone primarily focused on doing alignment research directly.
You might call this a quibble and say you could fit this into a "one-camp" model, but I think there's a pretty big problem for the model if the response is "there's one camp, just with two radically different camps within the one camp".
I don't really disagree with the directional push of this post: that a substantial fraction of community builders should dramatically increase how much they try to learn the relatively in-the-weeds details of object-level causes.
Clearly, as a purely factual matter, there is a community-building track, albeit one that doesn't currently have a ton of roles; the title is an overstatement.
My point is that it's not separate. People doing community building can (and should) talk a bunch to people focused on direct work. And we should see some people moving back and forth between community building and more direct work.
I think if we take a snapshot in 2022 it looks a bit more like there's a community-building track. So arguably my title is aspirational. But I think the presence or absence of a "track" (that people make career decisions based on) is a fact spanning years/decades, and my best guess is that (for the kind of reasons articulated here) we'll see more integration of these areas, and the title will be revealed as true with time.
Overall: playing a bit fast and loose, blurring aspirations with current reporting. But I think it's more misleading to say "there is a separate community-building track" than to say there isn't. (The more epistemically virtuous thing to say would be that it's unclear if there is, and I hope there isn't.)
BTW I agree that the title is flawed, but don't have something I feel comparably good about overall. But if you have a suggestion I like, I'll change it.
(Maybe I should just change "track" in the title to "camp"? Feels borderline to me.)
Could change "there is no" to "against the" or "let's not have a"?
Thanks, changed to "let's not have a …"
I guess you want to say that most community building needs to be comprehensively informed by knowledge of direct work, not that each person who works in (what can reasonably be called) community building needs to have that knowledge.
Maybe something like "Most community building should be shot through by direct work", or something more distantly related to that.
Though maybe you feel that still presents direct work and community-building as more separate than ideal. I might not fully buy the one-camp model.
I do think that still makes them sound more separate than ideal. While I think many people should be specializing towards community building or direct work, specializing in community building should typically involve a good amount of time paying close attention to direct work, and specializing in direct work should in many cases involve a good amount of time looking to leverage that knowledge to inform community building.
To gesture at (part of) this intuition: I think some of the best content we have for community building includes The Precipice, HPMoR, and Cold Takes. In all cases these were written by people who went deep on the object level. I don't think this is a coincidence, and while I don't think all community-building content needs that level of expertise to produce well, I think that if we were trying to just use material written by specialized community builders (as one might imagine would be more efficient, since presumably they'll know best how to reach the relevant audiences, etc.) we'd be in much worse shape.
Yeah, I get that. I guess it's not exactly inconsistent with the shot-through formulation, but probably it's a matter of taste how to frame it so that the emphasis comes out right.
After reflecting further and talking to people, I changed "track" in the title to "camp"; I think this more accurately conveys the point I'm making.
Yes. I see some parallels between this discussion and the discussion about the importance of researchers being teachers and vice versa in academia. I see the logic of that a bit, but also think that in academia it's often applied dogmatically and in a way that underrates the benefits of specialisation. Thus while I agree that it can be good to combine community building and object-level work, I think that heuristic needs to be applied with some care and on a case-by-case basis.
Fwiw, my role is similar to yours, and granted that LessWrong has a much stronger focus on Alignment, but I currently feel that a very good candidate for the #1 reason I will fail to steer LW to massive impact is that I'm not, and haven't been, an Alignment researcher (and perhaps Oli hasn't been either, but he's a lot more engaged with the field than I am).
My first-pass response is that this is mostly covered by:
>It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen
(Perhaps I should have called out building infrastructure as an important type of this.)
Now, I do think it's important that the infrastructure is pointed towards the things we need for the eventual communities of people doing direct work. This could come about via you spending enough time obsessing over the details of what's needed for that (I don't actually have enough resolution on whether you're doing enough obsessing over details for this, but plausibly you are), or via you taking a bunch of the direction (i.e. what software is actually needed) from people who are more engaged with that.
So I'm quite happy with there being specialized roles within the one camp. I don't think there should be two radically different camps within the one camp. (Where the defining feature of "two camps" is that people overwhelmingly spend time talking to people in their own camp, not the other camp.)
My hot take for the EA Forum team (and for most of CEA in general) is that it would probably increase its impact on the world a bunch if people on the team participated more in object-level discussions and tried to combine their models of community building more with their models of direct work.
I've tried pretty hard to stay engaged with the AI Alignment literature and the broader strategic landscape during my work on LessWrong, and I think that turned out to be really important for how I thought about LW strategy.
I indeed think it isn't really possible for the EA Forum team to not be making calls about what kind of community-building work needs to happen. I don't think anyone else at CEA really has the context to think about the impact of various features on the EA Forum, and the team is inevitably going to have to make a lot of decisions that will have a big influence on the community, in a way that makes it hard to defer.
I would find it helpful to have more precision about what it means to "participate more in object-level discussion".
For example: did you think that I/the forum was more impactful after I spent a week doing ELK? If the answer is "no", is that because I need to be at the level of winning an ELK prize to see returns in my community-building work? Or is it about the amount of time spent rather than my skill level (e.g. I would need to have spent a month rather than a week in order to see a return)?
Definitely in expectation I would expect the week doing ELK to have had pretty good effects on your community building, though I don't think the payoff is particularly guaranteed, so my guess would be "Yes".
Things like engaging with ELK, thinking through Eliezer's List O' Doom, and thinking through some of the basics of biorisk all seem quite valuable to me, and my takes on those issues are very deeply entangled with a lot of community-building decisions I make, so I expect similar effects for you.
Thanks! I spend a fair amount of time reading technical papers, including the things you mentioned, mostly because I spend a lot of time on airplanes and this is a vaguely productive thing I can do on an airplane, but honestly this just mostly results in me being better able to make TikToks about obscure theorems.
Maybe my confusion is: when you say "participate in object-level discussions" you mean less "be able to find the flaw in the proof of some theorem" and more "be able to state what's holding us back from having more/better theorems"? That seems more compelling to me.
[Speaking for myself, not Oliver …]
I guess that a week doing ELK would help on this: probably not a big boost, but the type of thing that adds up over a few years.
I expect that for this purpose you'd get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who's likely to do decently well at ELK.
(Or a week on each; I just want to comment on the allocation of a fixed amount of time rather than on increasing the total.)
Cool, yeah, that split makes sense to me. I had originally assumed that "talking to people about models of whether ELK helps anything" would fall into a "community building track", but upon rereading your post more closely I don't think that was the intended interpretation.[1]
FWIW the "only one track" model doesn't perfectly map to my intuition here. E.g. the founders of DoorDash spent time using their own app as delivery drivers, and that experience was probably quite useful for them, but I still think it's fair to describe them as being on the "create a delivery app" track rather than the "be a delivery driver" track.
I read you as making an analogous suggestion for EA community builders, and I would describe that as being "super customer focused" or something, rather than having only one "track".
You say "obsessing over the details of what's needed in direct work", and talking to experts definitely seems like an activity that falls in that category.
>It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen
I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25% and 70%) of people who do community-building work aren't trying to make calls about what kinds of community-building work needs to happen.
Noticing that the (25%, 70%) figure is sufficiently different from what I would have said that we must be understanding some of the terms differently.
My clause there is intended to include cases like: software engineers (but not the people choosing what features to implement); caterers; lawyers … basically, if a professional could do a great job as a service without being value-aligned, then I don't think they're making calls about what kind of community building needs to happen.
I don't mean to include the people choosing features to implement on the forum (after someone else has decided that we should invest in the forum), people choosing what marketing campaigns to run (after someone else has decided that we should run marketing campaigns), people deciding how to run an intro fellowship week to week (after someone else told them to), etc. I do think in this category maybe I'd be happy dipping under 20%, but wouldn't be very happy dipping under 10%. (If it's low figures like this, it's less likely that they'll be literally trying to do direct work with that time vs just trying to keep up with its priorities.)
Do you think we have a substantive disagreement?
I guess I think there's a continuum of how much people are making those calls. There are often a bunch of micro-level decisions that people are making which are ideally informed by models of what it's aiming for. If someone is specializing in vegan catering for EA events, then I think it's great if they don't have models of what it's all in service of, because it's pretty easy for the relevant information to be passed to them anyway. But I think most (maybe >90%) roles that people centrally think of as community building have significant elements of making these choices.
I guess I'm now thinking my claim should be more like "the fraction should vary with how high-level the choices you're making are", and provide some examples of reasonable points along that spectrum?