It’s true that all data and algorithms are biased in some way. But I suppose the question is whether the bias from this is less than what you get from human experts, whose paycheques often lead them to think in a certain way.
I’d imagine that any such system would not be trusted implicitly to start with, but would have to build up a reputation for providing useful predictions.
In terms of implementation, I’m imagining people building complex models of the world, as in decision making under deep uncertainty, with the AI mainly providing a user-friendly interface for asking questions about the model; something like the sketch below.
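For what it’s worth, here’s a minimal sketch in Python of the separation I mean. Everything in it is hypothetical (the toy water-supply model, the function names, the numbers; none of it comes from a real DMDU library); the point is only the architecture: the world model is built and vetted separately, a scenario ensemble spans the deep uncertainty, and the AI layer’s only job would be translating a natural-language question into calls like `query_robustness`.

```python
import random

# Hypothetical toy model. All names and numbers here are illustrative,
# not taken from any real decision-support library.
def run_model(policy: str, rainfall: float, demand_growth: float) -> float:
    """Toy water-supply model: years until shortage under a given policy."""
    capacity = {"build_reservoir": 120.0, "reduce_demand": 90.0}[policy]
    usage_rate = demand_growth * 10 - rainfall * 2
    return capacity / max(usage_rate, 1e-6)

def sample_scenarios(n: int) -> list[dict]:
    """Sample deeply uncertain inputs over wide, unweighted ranges
    (no probabilities are assumed; that's the 'deep uncertainty' part)."""
    return [
        {"rainfall": random.uniform(0.5, 2.0),
         "demand_growth": random.uniform(1.0, 3.0)}
        for _ in range(n)
    ]

def query_robustness(policy: str, scenarios: list[dict],
                     threshold: float = 8.0) -> float:
    """The kind of question a conversational front end would translate
    into: 'in what share of plausible futures does this policy last at
    least `threshold` years?'"""
    ok = sum(run_model(policy, **s) >= threshold for s in scenarios)
    return ok / len(scenarios)

scenarios = sample_scenarios(10_000)
for policy in ("build_reservoir", "reduce_demand"):
    print(policy, f"{query_robustness(policy, scenarios):.1%}")
```

The design point is that the AI never invents answers: the model and its assumptions stay inspectable by humans, and the AI only makes them queryable.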
At best I think it would carry around the same bias as humans, and potentially much worse. As for paycheque influences on human experts: an AI would likely lean the same way as its developer, since these systems tend to inherit developer bias heavily (the developer is the one measuring success, largely by their own metrics), so there’s not much of a difference there in my opinion.
I’m not saying the idea is bad, but I’m not sure it offers enough to offset its significant resource and risk costs, except when used as a data-collation tool for human experts. Built-up trust, neutrality vetting, and careful implementation can all be applied to humans too.
That said, I’m just one person. A stranger on the internet. There might be people working on this who significantly disagree with me.