Among the people actually working on existential risk/far future, my impression is that this ‘competition’ mindset doesn’t exist to nearly the same extent (I imagine the same is true in the ‘evidence’ causes, to borrow your framing). So it’s a little alarming, at least to me, to see competitive camps forming in the broader EA community, and to hear (for example) reports of people who value x-risk research ‘dismissing’ global poverty work.
Toby Ord, for example, is heavily involved in both global poverty/disease work and far future work with FHI. In my own case, I spread my bets by working on existential risk while my donations (other than unclaimed expenses) go to AMF and SCI. This is because I have a lot of uncertainty on the matter, and frankly I think it’s unrealistic not to have a lot of uncertainty on it. I think this line (“There should definitely be people in the world who think about existential risk and there should definitely be people in the world providing evidence on the effectiveness of charitable interventions.”) more accurately sums up the views of most researchers I know working on existential risk.
You’re quite right that there are people like Toby (and clearly yourself) who are genuinely and deeply concerned by causes like global poverty while also working on very different causes like x-risk, and are not dismissive of either. The approach you describe seems very sensible, and it would be great to keep (or make?) room for it in the EA ethos. If people felt that EA committed them to open battle until the one best cause emerged victorious atop a pile of bones… well, that could cause problems. One thing which would help avoid it (and might be a worthwhile thing to do overall) would be to work out and establish a set of norms for potentially divisive or dismissive discussions of different EA causes.
That said, I am uncertain as to whether the different parts of EA will naturally separate, and whether this would be good or bad. I’m inclined to think that it would be bad, partly because right now everyone benefits from the greater chance at critical mass that we can achieve together, and partly because broad EA makes for a more intellectually interesting movement, which helps draw people in. But I can see the advantages of a robustly evidenced, empiricist, GiveWell/GWWC Classic-type movement. I’ve devoted a certain amount of time to that myself, including helping with Joey and Katherine Savoie’s endeavours along these lines at Charity Science.
This also seems like a good time to reiterate that I agree that “there should definitely be people in the world who think about existential risk”, that I don’t want to be dismissive of them either, and that my defending the more ‘empiricist’, poverty-focused part of EA doesn’t mean that I automatically subscribe to every x-risk sceptic attitude that you can find out there.