Another neglected way out is to precisify the notion of causality used in DDA (and in ordinary language) so as to include conceptions of explanation and credit attribution, thus exempting agents from liability for merely random effects. MacAskill and Mogensen come close to contemplating this point in section 3.3, but they then focus on the Arms Trader example, which is close to a strawman here, and conclude:
We grant that it sometimes sounds wrong to say that you do harm to another when you initiate a causal sequence that ends with that person being harmed through the voluntary behavior of some other agent. But so far as we can see, this is entirely explained in terms of pragmatic factors like those discussed earlier: that is, in terms of conversational implicatures that typically attach to locutions associated with the ‘doing’ side of the doing/allowing distinction.
The problem with the voluntary behavior of others is not that it would necessarily exempt you from responsibility, but that it would often make your action causally irrelevant. The claim “Agent X’s action a caused event e” is ambiguous between:
(i) X’s action a belongs to the causal chain that led to e, and
(ii) in addition to (i), a increased the probability of e happening.
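One minimal way to make this precise (a sketch of my reading, not MacAskill and Mogensen’s own formulation) is to treat (ii) as a probability-raising condition layered on top of the chain membership in (i):

$$a \text{ causes } e \text{ in sense (ii)} \iff a \text{ causes } e \text{ in sense (i)} \ \text{and} \ P(e \mid a) > P(e \mid \lnot a).$$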
Sense (i) is not a very useful notion of causality: basically every state of the world causes all subsequent states (within the corresponding light cone), because every event has repercussions.
Thus, when we say that carbon emissions (via climate change) caused the floods in Lisbon in the last few days, we are not stating the obvious fact that, given the chaotic nature of long-term climate dynamics, any different world history would have implied different rain patterns. We are rather saying that carbon emissions (and global warming) made such extreme events more likely. Nor is this straightforwardly connected to predictability: something might be hard to predict, yet easy to explain in hindsight.
It is fairly intuitive that we normally use the more refined notion of causality in practical reasoning: though we might blame an arms trader, we don’t even consider blaming every link in the supply chains that made some murders possible. Thus, when we say that all of my actions will cause the identity of some future people, we are speaking in sense (i). But the notion of causality relevant to DDA is (ii); in this sense, I may cause the identity of some future people by making some gene pools more likely than their alternatives (for instance, by having kids, or by working in fertilization). So my mother’s school teacher did not cause my birth in any way, though my mother’s marrying my father did.
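To make the contrast concrete (the comparison is illustrative, not drawn from the paper): the teacher’s lesson and the marriage both sit on the causal chain leading to my birth, so both count as causes in sense (i); but only the marriage satisfies the probability-raising clause of (ii):

$$P(\text{birth} \mid \text{lesson}) \approx P(\text{birth} \mid \text{no lesson}), \qquad P(\text{birth} \mid \text{marriage}) \gg P(\text{birth} \mid \text{no marriage}).$$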