Whether this is right or wrong (and Ryan is certainly correct that Dylan Matthews’ piece didn’t offer a knock-down argument against focusing on AI risk, which I doubt it was intended to do), it’s worth noting that the article in question wasn’t only about this issue. It focused primarily on Matthews’ worries about the EA movement as a whole, following EA Global San Francisco. These included a lack of diversity; the risks of the focus on meta-charities and movement building (which can of course be valuable, but can also become self-serving and self-congratulatory); and the attitude of some of those focused on x-risks toward those focused on global poverty. On this last point, here was my comment from Facebook:
Global poverty genuinely is increasingly marginalised and dismissed as an EA cause. Some people here may be misled by the fact that poverty is frequently placed front and centre in the rhetoric. But the rhetoric is often just that: a device explicitly intended to recruit new batches of EAs, who can then be directed towards supporting x-risk or meta causes. Sometimes this strategy is stated openly, and that may indeed be unusually common in the Bay Area, but it’s widespread among people and organisations elsewhere. (As a matter of fixed policy I’m not going to name them, because that would be counterproductive, and probably the wrong thing to do!)
I’ve heard many people express this perspective. To take one example, Sasha Cooper noted on the EA Forum that “those committed to poverty [...] often seem to be looked on as incomplete or fledgling EAs”, and in a Facebook thread (http://on.fb.me/1gYKJN6) that there’s also a related but distinct disagreement between “quantifiers” and “speculators” (with “quantifiers” often, but not always, supporting global poverty charities), one which is fairly open and occasionally hostile. I perceive the hostility and dismissiveness as coming mainly from supporters of speculative causes, but I’m sure it sometimes goes the other way.
Disclaimer 1: I unfortunately couldn’t attend EA Global SF, because my visa makes it impractical to leave Canada for a while, so I don’t know first-hand what the tilt of the conference was. I heard second-hand that it was very x-risk- and meta-heavy.
Disclaimer 2: I obviously say this as someone who leans towards more quantifiable and less speculative approaches, and thinks that global poverty is (probably) the best ultimate cause to donate to. But I intellectually respect many other EA cause areas (where ‘respect’ means something genuine and meaningful, rather than something automatically handed out to anyone’s view).