Thanks for this very interesting and clearly articulated post. A comment specifically on the “camps” thing.
Among the people actually working on existential risk/far future, my impression is that this ‘competition’ mindset doesn’t exist to nearly the same extent (I imagine the same is true in the ‘evidence’ causes, to borrow your framing). And so it’s a little alarming, at least to me, to see competitive camps forming in the broader EA community, and to hear (for example) reports of people who value x-risk research ‘dismissing’ global poverty work.
Toby Ord, for example, is heavily involved in both global poverty/disease and far future work with FHI. In my own case, I spread my bets by working on existential risk while my donations (other than unclaimed expenses) go to AMF and SCI. This is because I have a lot of uncertainty on the matter, and frankly I think it’s unrealistic not to have a lot of uncertainty on it. I think this line (“There should definitely be people in the world who think about existential risk and there should definitely be people in the world providing evidence on the effectiveness of charitable interventions.”) more accurately sums up the views of most researchers I know working on existential risk.
I realise that this might be seen as going against the EA ‘ethos’ to a certain extent—a lot of the aim is to be able to rank things clearly and objectively, and choose the best causes. But this gets very difficult when you start to include the speculative causes. It’s the nature of existential risk research to be wrong a lot of the time—much of the work concerns high-impact, low-probability risks that may not come to pass, many of the interventions may not take effect until much further in the future, it is hard to predict whether it’s our work that makes the crucial difference, and so on—all of which makes impact difficult to measure.
I’m happy to say that existential risk (and global catastrophic risk) reduction is an important area of work. I think there are strong, evidence-based arguments that it has been under-served and underfunded globally to date, for reasons well articulated elsewhere. I think there are also strong arguments that e.g. global poverty is under-served and underfunded, for a different set of reasons. I’m happy to say I consider these both to be great causes, with strong reasons to fund them. But reducing “donate to AMF vs donate to CSER” down to e.g. lives saved in the present versus speculative lives saved in the future involves so much gross simplification, and assumptions that could be wrong by so many orders of magnitude, that I’m not comfortable doing it. Add to this moral uncertainty over the value of present lives versus the value of speculative future lives, the value of animal lives, and so on, and it gets even more difficult.
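To make the orders-of-magnitude point concrete, here is a toy back-of-envelope comparison in Python. Every number in it is invented purely for illustration (the cost per life saved, the baseline probability of catastrophe, how much risk reduction a billion dollars buys, how many future lives are at stake); it is a sketch of why such comparisons are so assumption-sensitive, not an estimate anyone should act on.

```python
# Illustrative only: all numbers below are hypothetical, chosen to show how
# sensitive a "safe bet vs long shot" expected-value comparison is to inputs.

def lives_per_dollar_safe(cost_per_life_saved):
    # A well-evidenced intervention: roughly constant lives saved per dollar.
    return 1.0 / cost_per_life_saved

def lives_per_dollar_xrisk(p_risk, delta_p_per_billion, future_lives):
    # Expected lives saved per dollar: the dollar's pro-rata share of the
    # risk reduction, times the probability the risk would otherwise strike,
    # times the number of lives at stake.
    return (p_risk * delta_p_per_billion / 1e9) * future_lives

safe = lives_per_dollar_safe(cost_per_life_saved=5_000)  # hypothetical figure

for delta_p in (1e-4, 1e-2):           # risk reduction bought per $1bn (guess)
    for future_lives in (1e10, 1e16):  # lives at stake: wide moral/empirical range
        xrisk = lives_per_dollar_xrisk(0.1, delta_p, future_lives)
        print(f"delta_p={delta_p:g}, future_lives={future_lives:g}: "
              f"x-risk/safe ratio = {xrisk / safe:.1e}")
```

Running this, the ratio between the two options swings from about 0.5 to about 5×10⁷ across the four (already quite constrained) assumption combinations—some eight orders of magnitude—which is exactly why I’m uncomfortable collapsing the comparison into a single number.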
I don’t know how to resolve this fully within the EA framing. My personal ‘dodge’ has been to prioritise raising funds from non-EA sources for FHI and CSER (>95% if one excludes Musk, >80% if one includes him). I would be a hypocrite to recommend that someone stop funding AMF in favour of CSER, given that I’m not doing that myself. But I do appreciate that an EA still has to decide how to allocate her funds between x-risk, global poverty, animal altruism, and other causes. I think we will learn from continuing excellent work by ‘meta’ groups like GiveWell/OPP and others. But to a certain extent, I think we will have to recognise, and respect, that at some point there are moral and empirical uncertainties that are hard to reduce away.
Perhaps for now the best we can say is: “There are a number of very good causes that are globally under-served. There are significant uncertainties that make it difficult to ‘rank’ them, and the choice will partly depend on a person’s moral beliefs and appetite for ‘long shots’ vs ‘safe bets’, as well as near-term opportunities for making a clear difference in a particular area. But we can agree that there are solid reasons to support this set of causes over others.”
You’re quite right that there are people like Toby (and clearly yourself) who are genuinely and deeply concerned about causes like global poverty while also working on very different causes like x-risk, and are not dismissive of either. The approach you describe seems very sensible, and it would be great to keep (or make?) room for it in the EA ethos. If people felt that EA committed them to open battle until the one best cause emerged victorious atop a pile of bones… well, that could cause problems. One thing that would help avoid this (and might be worthwhile in its own right) would be to work out and establish a set of norms for potentially divisive or dismissive discussions of different EA causes.
That said, I am uncertain as to whether the different parts of EA will naturally separate, and whether this would be good or bad. I’m inclined to think that it would be bad, partly because right now everyone benefits from the greater chance at critical mass that we can achieve together, and partly because broad EA makes for a more intellectually interesting movement, which helps draw people in. But I can see the advantages of a robustly evidenced, empiricist, GiveWell/GWWC Classic-type movement. I’ve devoted a certain amount of time to that myself, including helping out Joey and Katherine Savoie’s endeavours along these lines at Charity Science.
This also seems like a good time to reiterate that I agree that “there should definitely be people in the world who think about existential risk”, that I don’t want to be dismissive of them either, and that my defending the more ‘empiricist’, poverty-focused part of EA doesn’t mean that I automatically subscribe to every x-risk sceptic attitude that you can find out there.