Clear benefits, diffuse harms
It is worth noticing when systems deliver benefits in a few obvious ways but impose many small harms. An example is blocking new housing. It benefits the neighbours a lot (they avoid having construction nearby) while the people harmed are scattered marginal buyers who could have afforded a home but now can't.
But these harms are real and should be tallied.
Much recent discussion in EA has suggested common-sense risk-reduction strategies that would stop clearly bad behaviour. Often we all agree on what that clearly bad behaviour is.
But these risk-reduction strategies would often also set norms against a range of greyer behaviour that the suggesters don't engage in, or that doesn't seem valuable to them. If you don't live with your coworkers, then suggesting a norm against it seems fine: it would make it harder for people to end up in weird living situations. But I know people who have loved living with coworkers. That's a diffuse harm.
Mainly I think this involves acknowledging that people are a lot weirder than you think. People want things I don't expect them to want; they consent, in business, housing and relationships, to things I'd never expect them to. People are wild. And I think it's worth having bright lines against some kinds of behaviour that are bad or nearly always bad (I'd suggest dating your reports is very unwise), but a lot comes down to human preferences, and to understand those we need to elicit both wholesome and illicit preferences, and to consider harms that are diffuse.
Note that I’m not saying which way the balance of harms falls, but that both types should be counted.