(Weakly-held view)
It seems like my job for the past 4 years (building software for the EA community; for most of that time, the EA Forum) has been pretty much a “Community Building” track job.
You might argue that I should view myself as primarily working on a specific cause area, and that I should have spent 20% of my time working on it (for me it would be AI alignment), but that would be pretty non-obvious. And in any case, I would still look radically different from someone primarily focused on doing alignment research directly.
You might call this a quibble and say you could fit this into a “one-camp” model, but I think there’s a pretty big problem for the model if the response is “there’s one camp, just with two radically different camps within the one camp”.
I don’t really disagree with the directional push of this post: that a substantial fraction of community builders should dramatically increase how much they try to learn the relatively in-the-weeds details of object-level causes.
Clearly as a purely factual matter, there is a community-building track, albeit one that doesn’t currently have a ton of roles—the title is an overstatement.
My point is that it’s not separate. People doing community building can (and should) talk a bunch to people focused on direct work. And we should see some people moving backwards and forwards between community building and more direct work.
I think if we take a snapshot in 2022 it looks a bit more like there’s a community-building track. So arguably my title is aspirational. But I think the presence or absence of a “track” (that people make career decisions based on) is a fact spanning years/decades, and my best guess is that (for the kind of reasons articulated here) we’ll see more integration of these areas, and the title will be revealed as true with time.
Overall: I’m playing a bit fast and loose, blurring aspirations with current reporting. But I think it’s more misleading to say “there is a separate community-building track” than to say there isn’t. (The more epistemically virtuous thing to say would be that it’s unclear whether there is, and I hope there isn’t.)
BTW I agree that the title is flawed, but I don’t have something I feel comparably good about overall. If you have a suggestion I like, I’ll change it.
(Maybe I should just change “track” in the title to “camp”? Feels borderline to me.)
Could change “there is no” to “against the” or “let’s not have a”?
Thanks, changed to “let’s not have a …”
I guess you want to say that most community building needs to be comprehensively informed by knowledge of direct work, not that each person who works in (what can reasonably be called) community building needs to have that knowledge.
Maybe something like “Most community building should be shot through with direct work”, or something more distantly related to that.
Though maybe you feel that still presents direct work and community-building as more separate than ideal. I might not fully buy the one camp model.
I do think that still makes them sound more separate than ideal. While I think many people should be specializing towards community building or direct work, I think that specializing in community building should typically involve a good amount of time paying close attention to direct work, and that specializing in direct work should in many cases involve a good amount of time looking to leverage that knowledge to inform community building.
To gesture at (part of) this intuition: I think some of the best content we have for community building includes The Precipice, HPMoR, and Cold Takes. In all cases these were written by people who went deep on the object level. I don’t think this is a coincidence, and while I don’t think all community-building content needs that level of expertise to be produced well, I think that if we tried to use only material written by specialized community builders (as one might imagine would be more efficient, since presumably they’d know best how to reach the relevant audiences, etc.) we’d be in much worse shape.
Yeah, I get that. I guess it’s not exactly inconsistent with the “shot through” formulation, but probably it’s a matter of taste how to frame it so that the emphasis comes out right.
After reflecting further and talking to people I changed “track” in the title to “camp”; I think this more accurately conveys the point I’m making.
Yes. I see some parallels between this discussion and the discussion about the importance of researchers being teachers and vice versa in academia. I see the logic of that a bit but also think that in academia, it’s often applied dogmatically and in a way that underrates the benefits of specialisation. Thus while I agree that it can be good to combine community-building and object-level work, I think that that heuristic needs to be applied with some care and on a case-by-case basis.
Fwiw, my role is similar to yours (granted, LessWrong has a much stronger focus on Alignment), but I currently feel that a very good candidate for the #1 reason I will fail to steer LW to massive impact is that I’m not and haven’t been an Alignment researcher (and perhaps Oli hasn’t been either, but he’s a lot more engaged with the field than I am).
My first-pass response is that this is mostly covered by:
>It’s fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn’t be the ones making the calls about what kind of community-building work needs to happen
(Perhaps I should have called out building infrastructure as an important type of this.)
Now, I do think it’s important that the infrastructure is pointed towards the things we need for the eventual communities of people doing direct work. This could come about via you spending enough time obsessing over the details of what’s needed for that (I don’t actually have enough resolution on whether you’re doing enough obsessing over details for this, but plausibly you are), or via you taking a bunch of the direction (i.e. what software is actually needed) from people who are more engaged with that.
So I’m quite happy with there being specialized roles within the one camp. I don’t think there should be two radically different camps within the one camp. (Where the defining feature of “two camps” is that people overwhelmingly spend time talking to people in their camp not the other camp.)
My hot take for the EA Forum team (and for most of CEA in general) is that it would probably increase its impact on the world a bunch if people on the team participated more in object-level discussions and tried to combine their models of community building more with their models of direct work.
I’ve tried pretty hard to stay engaged with the AI Alignment literature and the broader strategic landscape during my work on LessWrong, and I think that turned out to be really important for how I thought about LW strategy.
I indeed think it isn’t really possible for the EA Forum team to not be making calls about what kind of community-building work needs to happen. I don’t think anyone else at CEA really has the context to think about the impact of various features on the EA Forum, and the team is inevitably going to have to make a lot of decisions that will have a big influence on the community, in a way that makes it hard to defer.
I would find it helpful to have more precision about what it means to “participate more in object-level discussions”.
For example: did you think that I/the forum was more impactful after I spent a week doing ELK? If the answer is “no,” is that because I need to be at the level of winning an ELK prize to see returns in my community building work? Or is it about the amount of time spent rather than my skill level (e.g. I would need to have spent a month rather than a week in order to see a return)?
In expectation I would definitely expect the week doing ELK to have had pretty good effects on your community building, though I don’t think the payoff is particularly guaranteed, so my guess would be “Yes”.
Things like engaging with ELK, thinking through Eliezer’s List O’ Doom, and thinking through some of the basics of biorisk all seem quite valuable to me, and my takes on those issues are very deeply entangled with a lot of the community-building decisions I make, so I expect similar effects for you.
Thanks! I spend a fair amount of time reading technical papers, including the things you mentioned, mostly because I spend a lot of time on airplanes and this is a vaguely productive thing I can do on an airplane, but honestly this just mostly results in me being better able to make TikToks about obscure theorems.
Maybe my confusion is: when you say “participate in object level discussions” you mean less “be able to find the flaw in the proof of some theorem” and more “be able to state what’s holding us back from having more/better theorems”? That seems more compelling to me.
[Speaking for myself not Oliver …]
I guess that a week doing ELK would help on this—probably not a big boost, but the type of thing that adds up over a few years.
I expect that for this purpose you’d get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who’s likely to do decently well at ELK.
(Or a week on each, but I want to comment on the allocation of a fixed amount of time rather than on increasing the total.)
Cool, yeah that split makes sense to me. I had originally assumed that “talking to people about models of whether ELK helps anything” would fall into a “community building track,” but upon rereading your post more closely I don’t think that was the intended interpretation.[1]
FWIW the “only one track” model doesn’t perfectly map to my intuition here. E.g. the founders of DoorDash spent time using their own app as delivery drivers, and that experience was probably quite useful for them, but I still think it’s fair to describe them as being on the “create a delivery app” track rather than the “be a delivery driver” track.
I read you as making an analogous suggestion for EA community builders, and I would describe that as being “super customer focused” or something, rather than having only one “track”.
You say “obsessing over the details of what’s needed in direct work,” and talking to experts definitely seems like an activity that falls in that category.
>It’s fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn’t be the ones making the calls about what kind of community-building work needs to happen
I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25 and 70%) of people who do community-building work aren’t trying to make calls about what kinds of community-building work needs to happen.
Noticing that the (25%, 70%) figure is sufficiently different from what I would have said that we must be understanding some of the terms differently.
My clause there is intended to include cases like: software engineers (but not the people choosing what features to implement); caterers; lawyers … basically, if a professional could do a great job providing the service without being value aligned, then I don’t think that role is making calls about what kind of community-building work needs to happen.
I don’t mean to include the people choosing which features to implement on the forum (after someone else has decided that we should invest in the forum), people choosing what marketing campaigns to run (after someone else has decided that we should run marketing campaigns), people deciding how to run an intro fellowship week to week (after someone else told them to), etc. For this category maybe I’d be happy with the time spent engaging with direct work dipping under 20%, but wouldn’t be very happy with it dipping under 10%. (At figures this low, it’s less likely that they’ll be literally trying to do direct work with that time vs. just trying to keep up with its priorities.)
Do you think we have a substantive disagreement?
I guess I think there’s a continuum of how much people are making those calls. There are often a bunch of micro-level decisions people are making which are ideally informed by models of what the work is aiming for. If someone is specializing in vegan catering for EA events, then I think it’s fine if they don’t have models of what it’s all in service of, because it’s pretty easy for the relevant information to be passed to them anyway. But I think most (maybe >90%) of the roles that people centrally think of as community building have significant elements of making these choices.
I guess I’m now thinking my claim should be more like “the fraction should vary with how high-level the choices you’re making are”, and that I should provide some examples of reasonable points along that spectrum?