I spoke with a number of individuals from across several of the main EA cause areas about how they think about backfire risk in their work. Drawing on these conversations, my own reading on the Forum, and this extremely useful summary from Jim Buhler, I’ve defined four approaches: spotlighting [A], assigning precise probabilities [B], setting aside things you’re clueless about [C], and seeking ecologically inert interventions [D].
I am intentionally not endorsing any one of these approaches in this piece; my personal position is that they all have worrying flaws and I hope someone will come up with something better.
Hi Mal.
My preferred approach is to prioritise decreasing uncertainty. I do not know of any interventions that robustly increase animal welfare, because their uncertain effects on soil animals could dominate. So I would like to see more research on how to increase the welfare of soil animals. In contrast, approaches [A] and [C] practically ignore the effects on soil animals, [B] speculates about them, and [D] tries to minimise them.
If you weren’t doing [B] with moral weights, though, you would presumably have to worry about things other than effects on soil animals. So, ultimately, [B] remains an important crux for you.
(You could still say you’d prioritize decreasing uncertainty on moral weights if you thought there was too much uncertainty to justify doing [B], but the results from such research might never be precise enough to be action-guiding. You might have to endorse [B] despite the ambiguity, or one of the three other approaches.)
Hi Jim,
I do worry about effects besides those on soil animals. I think effects on microorganisms may easily be much larger.
I also see lots of value in decreasing the uncertainty about how the (expected hedonistic) welfare per unit time of different organisms and digital systems compares with that of humans. Past research on this, particularly Rethink Priorities’ (RP’s) moral weight project (MWP), has certainly been action-guiding in the sense of changing funding decisions. That research has not yet led to interventions which I consider to robustly increase welfare, but I do not think we should give up on finding these without trying much more.