My hot take for the EA Forum team (and for most of CEA in general) is that it would probably increase its impact on the world a bunch if people on the team participated more in object-level discussions and tried to combine their models of community-building more with their models of direct work.
I’ve tried pretty hard to stay engaged with the AI Alignment literature and the broader strategic landscape during my work on LessWrong, and I think that turned out to be really important for how I thought about LW strategy.
I indeed think it isn’t really possible for the EA Forum team to not be making calls about what kind of community-building work needs to happen. I don’t think anyone else at CEA really has the context to think about the impact of various features on the EA Forum, and the team is inevitably going to have to make a lot of decisions that will have a big influence on the community, in a way that makes it hard to defer.
I would find it helpful to have more precision about what it means to “participate more in object-level discussion”.
For example: did you think that I/the forum was more impactful after I spent a week doing ELK? If the answer is “no,” is that because I need to be at the level of winning an ELK prize to see returns in my community building work? Or is it about the amount of time spent rather than my skill level (e.g. I would need to have spent a month rather than a week in order to see a return)?
In expectation I would definitely expect the week doing ELK to have had pretty good effects on your community-building, though I don’t think the payoff is particularly guaranteed, so my guess would be “Yes”.
Things like engaging with ELK, thinking through Eliezer’s List O’ Doom, and thinking through some of the basics of biorisk all seem quite valuable to me, and my takes on those issues are very deeply entangled with a lot of the community-building decisions I make, so I expect similar effects for you.
Thanks! I spend a fair amount of time reading technical papers, including the things you mentioned, mostly because I spend a lot of time on airplanes and this is a vaguely productive thing I can do on an airplane, but honestly this mostly just results in me being better able to make TikToks about obscure theorems.
Maybe my confusion is: when you say “participate in object-level discussions” you mean less “be able to find the flaw in the proof of some theorem” and more “be able to state what’s holding us back from having more/better theorems”? That seems more compelling to me.
[Speaking for myself, not Oliver …]
I guess that a week doing ELK would help on this—probably not a big boost, but the type of thing that adds up over a few years.
I expect that for this purpose you’d get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who’s likely to do decently well at ELK.
(Or a week on each, but I wanted to comment on how to allocate a given amount of time rather than on increasing the total.)
Cool, yeah that split makes sense to me. I had originally assumed that “talking to people about models of whether ELK helps anything” would fall into a “community building track,” but upon rereading your post more closely I don’t think that was the intended interpretation.[1]
FWIW the “only one track” model doesn’t perfectly map to my intuition here. E.g. the founders of DoorDash spent time using their own app as delivery drivers, and that experience was probably quite useful for them, but I still think it’s fair to describe them as being on the “create a delivery app” track rather than the “be a delivery driver” track.
I read you as making an analogous suggestion for EA community builders, and I would describe that as being “super customer focused” or something, rather than having only one “track”.
You say “obsessing over the details of what’s needed in direct work,” and talking to experts definitely seems like an activity that falls in that category.