Also, I think we should be clear about what kinds of serious harms would in principle be justified on a rights-based (or contractualist) view. Harming people who are innocent or who pose no threat seems likely to violate rights and so be impermissible on such views. This seems to apply to massive global surveillance and to bombing civilian-populated regions, unless you can argue on such views that each person being surveilled or bombed poses a sufficient threat and that harming innocent threats is permissible, or that collateral damage to innocent non-threats is permissible. I would guess that statistical arguments about the probability of a random person being a threat rely on interpretations of these views that the people who hold them would reject, or that the probability of any given person being a threat would be too low to justify the harm to that person.
So, what kinds of objectionable harms could be justified on such views? I don’t think most people would qualify as serious enough threats to justify harming them in order to protect others, especially people in the far future.
This seems like a fruitful area of research—I would like to see more exploration of this topic. I don’t think I have anything interesting to say off the top of my head.