This post really resonates with me. Over winter 2021/22 I went on a retreat run by folks in the CFAR, MIRI, Lightcone cluster, and came away with some pretty crippling uncertainty about the sign of EA community building.[1] In retrospect, the appropriate response would have been one of the following:

- stop to investigate
- commit to community building (but maybe stop to investigate upon encountering new information or after some predetermined period of time)
- switch jobs
Instead, I continued on in my community building role, but with less energy, and with a constant cloud of uncertainty hanging over me. This was not good for my work outputs, my mental health, or my interactions and relationships with community building colleagues. Accordingly, “the correct response to uncertainty is *not* half-speed” is perhaps the number one piece of advice I’d give to my start-of-2022 self. I’m happy to see this advice so well elucidated here.
To be clear, I don’t believe the retreat was “bad” in any particular way, or that it was designed to propagate any particular views regarding community building, and I have a lot of respect for these Rationalist folks.
If you can, could you elaborate more on what caused this uncertainty at/after the retreat?
Firstly, I’ll say that I chose not to elaborate in my initial comment because Lizka’s post here is about what to do when faced with uncertainty in general, and I didn’t wish to turn the comments section into a rehash of the various arguments on whether community building in particular – either as a whole in its current state, or specific parts of it – is net positive or negative and to what degree. I’ve also personally moved on from my period of somewhat-debilitating uncertainty, and so I didn’t really want to be faced with replies and thus something of an obligation to re-engage with this debate. On top of this, experience has taught me to tread lightly, since the EA community is tight-knit and many people in this community have jobs in or adjacent to community building.
However, in addition to your reply, I’ve received two direct messages since posting my comment from community builders who sound like they’re in situations similar to the one I was in, so perhaps there is value in me elaborating. I’ll try to do that now.
(Note: I think this retreat catalyzed my processing of considerations and related uncertainties that I’d already been harboring. In other words, I don’t think I was hit with a bunch of completely new considerations about community building, or that I overly deferred. Note also: As I mention above, I’ve personally moved on from this topic with respect to my own career choices, and I’m glad to no longer have this weight of uncertainty on my shoulders. Therefore, I probably won’t engage further in the comment thread, if there are more replies. I realize this could be frustrating from an epistemics standpoint, and for that I apologize. Also, the points I list out below are just what I can think of off the top of my head right now, and so they should be viewed as an unpolished part of the picture rather than anything that’s close to complete or authoritative. Finally, I notice that what I write below is pretty critical stuff, and I very much hope that I don’t come across as disparaging of the great efforts being made by many community builders.)
Elaboration:
In a way, community building seemed to me to resemble a Ponzi scheme. My anecdata suggested that EA fellowships/intro programmes tended to disproportionately engage people who enjoy talking about moral philosophy and EA ideas, and who enjoy getting more people to think about EA ideas. A simplistic model here is that EA fellowships produce more EA community builders, who run more EA fellowships, which produce more EA community builders, and so on. Meanwhile, the actual problems, such as factory farming and x-risk, haven’t gone away. Me-at-the-time began to feel skeptical that the community building cycle I was witnessing was an efficient way to make progress on solving the actual problems.
Community building is a blast. I think this makes it easy to motivated-reason one’s way into pursuing it. At the time, my alternative to community building was a research role. Research, however, is hard work. (For me, at least. There may well be better researchers out there who don’t relate.) On the other hand, doing community building meant connecting with lots of cool new people and having interesting conversations and going on fun retreats around the world with other community builders. I also met my last two romantic partners through community building.[1]
To me, the direction of community building while I was involved felt like a fairly indiscriminate “more programs, more participants, more events, more hype”. (I’ll avoid giving concrete examples publicly, since that feels like straying into personal attack territory.) My sense was that there wasn’t enough application of cause prioritisation, or enough serious thought going into understanding the pipelines and talent bottlenecks within cause areas. I felt deeply uncertain about whether the proxy goals I was being encouraged to aim for – running a certain number of workshops, or attracting and retaining a certain number of participants – mapped all that well onto solving the actual problems.
My attempts to raise this concern with other community builders, including those above me, were mostly dismissed. This worried me. It seemed like the community building machine was not open to the hypothesis that (some of) what it was doing might be ineffective, or, worse, net negative. (More on the latter below.) On top of this, there seemed to be a tricky second-order effect at play: evaporative cooling whereby the community builders who developed concerns like mine exited, only to be replaced by more bullish community builders. The result: a disproportionately bullish community building machine. And there didn’t appear to be any countermeasures in place. For example, there was plenty of funding available if one wanted a paid role doing community building. But, in addition to the social disincentive, there was no funding available for evaluating/critiquing the impact of community building – at least, not that I was aware of.
I’d grown uncertain about whether the EA and AI safety communities had done more good than harm to date. Therefore, based on reference class forecasting, I’d grown uncertain as to whether the sign of future EA and AI safety activities would be net positive. I had significant (maybe ~33%) credence in these communities, if they continued to exist in roughly the same form, being negative for the world. “Shutting Down the Lightcone Offices” expresses similar thoughts to those I was having.
For a related comment I wrote, see here.