My first-pass response is that this is mostly covered by:
It’s fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn’t be the ones making the calls about what kind of community-building work needs to happen
(Perhaps I should have called out building infrastructure as an important type of this.)
Now, I do think it’s important that the infrastructure is pointed towards the things we need for the eventual communities of people doing direct work. This could come about via you spending enough time obsessing over the details of what’s needed for that (I don’t actually have enough resolution on whether you’re doing enough obsessing over details for this, but plausibly you are), or via you taking a bunch of the direction (i.e. what software is actually needed) from people who are more engaged with that.
So I’m quite happy with there being specialized roles within the one camp. But I don’t think there should be two radically different camps. (Where the defining feature of “two camps” is that people overwhelmingly spend time talking to people in their own camp, not the other.)
My hot take for the EA Forum team (and for most of CEA in general) is that it would probably increase its impact on the world a bunch if people on the team participated more in object-level discussions and tried to combine their models of community-building more with their models of direct work.
I’ve tried pretty hard to stay engaged with the AI Alignment literature and the broader strategic landscape during my work on LessWrong, and I think that turned out to be really important for how I thought about LW strategy.
I indeed think it isn’t really possible for the EA Forum team to not be making calls about what kind of community-building work needs to happen. I don’t think anyone else at CEA really has the context to think about the impact of various features on the EA Forum, and the team is inevitably going to have to make a lot of decisions that will have a big influence on the community, in a way that makes it hard to defer.
I would find it helpful to have more precision about what it means to “participate more in object level discussion”.
For example: did you think that I/the forum was more impactful after I spent a week doing ELK? If the answer is “no,” is that because I need to be at the level of winning an ELK prize to see returns in my community building work? Or is it about the amount of time spent rather than my skill level (e.g. I would need to have spent a month rather than a week in order to see a return)?
In expectation, I’d definitely expect the week doing ELK to have had pretty good effects on your community-building, though I don’t think the payoff is particularly guaranteed, so my guess would be “yes”.
Things like engaging with ELK, thinking through Eliezer’s List O’ Doom, and thinking through some of the basics of biorisk all seem quite valuable to me, and my takes on those issues are very deeply entangled with a lot of the community-building decisions I make, so I expect similar effects for you.
Thanks! I spend a fair amount of time reading technical papers, including the things you mentioned, mostly because I spend a lot of time on airplanes and this is a vaguely productive thing I can do on an airplane. But honestly this mostly just results in me being better able to make TikToks about obscure theorems.
Maybe my confusion is: when you say “participate in object level discussions” you mean less “be able to find the flaw in the proof of some theorem” and more “be able to state what’s holding us back from having more/better theorems”? That seems more compelling to me.
[Speaking for myself not Oliver …]
I guess that a week doing ELK would help on this—probably not a big boost, but the type of thing that adds up over a few years.
I expect that for this purpose you’d get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who’s likely to do decently well at ELK.
(Or a week on each; but I mean this as a comment about how to allocate a fixed amount of time, not a suggestion to increase the total.)
Cool, yeah that split makes sense to me. I had originally assumed that “talking to people about models of whether ELK helps anything” would fall into a “community building track,” but upon rereading your post more closely I don’t think that was the intended interpretation.[1]
FWIW the “only one track” model doesn’t perfectly map to my intuition here. E.g. the founders of DoorDash spent time using their own app as delivery drivers, and that experience was probably quite useful for them, but I still think it’s fair to describe them as being on the “create a delivery app” track rather than the “be a delivery driver” track.
I read you as making an analogous suggestion for EA community builders, and I would describe that as being “super customer focused” or something, rather than having only one “track”.
You say “obsessing over the details of what’s needed in direct work,” and talking to experts definitely seems like an activity that falls in that category.
>It’s fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn’t be the ones making the calls about what kind of community-building work needs to happen
I think this could be worth calling out more directly and emphatically. I think a large fraction (idk, between 25% and 70%) of people who do community-building work aren’t trying to make calls about what kinds of community-building work needs to happen.
Noticing that the (25%, 70%) figure is sufficiently different from what I would have said that we must be understanding some of the terms differently.
My clause there is intended to include cases like: software engineers (but not the people choosing what features to implement); caterers; lawyers … basically, if a professional could do a great job as a service without being value-aligned, then I don’t think that role is making calls about what kind of community building needs to happen.
I don’t mean to include the people choosing features to implement on the forum (after someone else has decided that we should invest in the forum), people choosing what marketing campaigns to run (after someone else has decided that we should run marketing campaigns), people deciding how to run an intro fellowship week to week (after someone else told them to), etc. For this category maybe I’d be happy dipping under 20%, but I wouldn’t be very happy dipping under 10%. (At figures this low, it’s less likely that they’ll literally be trying to do direct work with that time, vs. just trying to keep up with its priorities.)
Do you think we have a substantive disagreement?
I guess I think there’s a continuum of how much people are making those calls. There are often a bunch of micro-level decisions that people are making which are ideally informed by models of what the work is aiming for. If someone is specializing in vegan catering for EA events, then I think it’s fine if they don’t have models of what it’s all in service of, because it’s pretty easy for the relevant information to be passed to them anyway. But I think most (maybe >90%) roles that people centrally think of as community building have significant elements of making these choices.
I guess I’m now thinking my claim should be more like “the fraction should vary with how high-level the choices you’re making are”, and that I should provide some examples of reasonable points along that spectrum?