1. A system that will imprison a black person but not an otherwise-identical white person can be accurately described as “a racist system”
2. One example of such a system is employing an ML algorithm that uses race as a predictive factor to determine bond amounts and sentencing
3. White people will tend to be biased towards more positive evaluations of a racist system because they have not experienced racism, so their evaluations should be given lower weight
4. Non-white people tend to evaluate racist systems very negatively, even when they improve predictive accuracy
To me, the rational conclusion is to not support racist systems, such as the use of this predictive algorithm.
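To make #1 and #2 concrete, here is a minimal sketch of what such an algorithm looks like. The model and its weights are entirely invented for illustration; the point is only that when race enters as a feature with nonzero weight, two people identical on every other recorded variable receive different risk scores, so some detention threshold will imprison one and not the other.

```python
import math

def risk_score(prior_arrests, age, race_is_black,
               w=(0.8, -0.05, 0.6), b=-1.0):
    """Toy logistic 'risk' model with made-up weights.
    The third weight attaches directly to the race variable."""
    z = b + w[0] * prior_arrests + w[1] * age + w[2] * race_is_black
    return 1 / (1 + math.exp(-z))

# Two defendants identical in every recorded respect except race:
white_score = risk_score(prior_arrests=2, age=30, race_is_black=0)
black_score = risk_score(prior_arrests=2, age=30, race_is_black=1)

# Because the race coefficient is positive, black_score > white_score,
# so any detention threshold set between the two scores jails the
# black defendant but not the otherwise-identical white defendant.
assert black_score > white_score
```

This is the sense in which the system in #1 is "racist by construction": the disparity is not an emergent side effect but a direct consequence of the race coefficient.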
It seems like many EAs disagree, which is why I’ve tried to break down my thinking to identify specific points of disagreement. Maybe people believe that #4 is false? I’m not sure where to find hard data to prove it (custom Google survey maybe?). I’m ~90% sure it’s true, and would be willing to bet money on it, but if others’ credences are lower that might explain the disagreement.
Edit: Maybe an implicit difference is epistemic modesty regarding moral theories—you could frame my argument in terms of “white people misestimating the negative utility of racial discrimination”, but I think it’s also possible for demographic characteristics to bias one’s beliefs about morality. There’s no a priori reason to expect your demographic group to have more moral insight than others; one obvious example is the correlation between gender and support for utilitarianism. I don’t see any reason why men would have more moral insight, so as a man I might want to reduce my credence in utilitarianism to correct for this bias.
Similarly, I expect the disagreement between a white EA who likes race-based sentencing and a random black person who doesn’t to be a combination of disagreement about facts (e.g. the level of harm caused by racism) and moral beliefs (e.g. importance of fairness). However, *both* disagreements could stem from bias on the EA’s part, and so I think the EA ought not discount the random guy’s point of view by assigning 0 probability to the chance that fairness is morally important.