But maybe looking at leadership is the wrong way around, and it’s the rank-and-file members who led the charge.
Speaking from my geographically distant perspective: I definitely saw it as a leader-led shift rather than coming from the rank-and-file. There was always a minority of rank-and-file coming from Less Wrong who saw AI risk as supremely important, but my impression was that this position was disproportionately common in the (then) Centre for Effective Altruism, and there was occasional chatter on Facebook (circa 2014?) that some people there saw the global poverty cause as a way to funnel people towards AI risk.
I think the AI-risk faction started to assert itself more strongly in EA from about 2015, successfully persuading other leading figures one by one over the following years (e.g. Holden in 2016, as linked to by Carl). But by then I wasn't following EA closely, and I don't have a good sense of the timeline.