I have no problem with AI/machine learning being used in areas where the black-box nature does not matter very much, and the consequences of hallucinations or bias are small.
My problem is with the idea of “superhuman governance”, where unaccountable black-box machines make decisions that affect people’s lives significantly, for reasons that cannot be dissected and explained.
Far from preventing corruption, I think this is a gift-wrapped opportunity for the corrupt to hide their corruption behind the veneer of a “fair algorithm”. I don’t think it would be particularly hard to train a neural network to appear neutral while actually subtly favoring one outcome or another, by manipulating the training data or the reward function. There would be no way to tell from the code that this manipulation had occurred, because the actual “code” of a neural network is just the multiplication of ginormous matrices of inscrutable numbers.
Of course, the more likely outcome is that this happens by accident, and whatever biases and quirks arise from inherently non-random data sampling get baked into the decisions affecting everybody.
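To make the mechanism concrete, here’s a minimal sketch. It’s a toy scenario I invented (the “merit”/“group” features and the 0.3 label skew are all made up, not anyone’s real system): a perfectly ordinary logistic regression trained on subtly skewed historical labels ends up favoring one group, while its learned parameters are just a few opaque numbers that reveal nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying merit distributions...
group = rng.integers(0, 2, n)
merit = rng.normal(0.0, 1.0, n)

# ...but the historical labels in the training data are subtly skewed:
# group 1 needed a bit more merit to get a positive label.
label = (merit > np.where(group == 1, 0.3, 0.0)).astype(float)

# Features: merit, group membership, and a constant bias term.
X = np.column_stack([merit, group, np.ones(n)])

# Ordinary logistic regression, trained by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - label)) / n

print("learned weights:", w)  # three opaque numbers; the skew is invisible here

# Probe the trained model with the *same* merit scores for both groups.
test_merit = rng.normal(0.0, 1.0, 5000)
for g in (0, 1):
    Xt = np.column_stack([test_merit, np.full(5000, g), np.ones(5000)])
    approved = 1.0 / (1.0 + np.exp(-Xt @ w)) > 0.5
    print(f"group {g} approval rate at identical merit: {approved.mean():.2f}")
```

Nothing in the weight vector announces the skew; you can only detect it behaviorally, by probing the model with matched inputs. And whether the skew came from deliberate manipulation or sloppy sampling is indistinguishable after the fact.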
Human decision-making is spread over many, many people, so the impact of any one person being flawed is minimized. Taking humans out of the equation reduces the number of points of failure significantly, which means a flaw in the one remaining system propagates to every decision at once.
> I have no problem with AI/machine learning being used in areas where the black-box nature does not matter very much, and the consequences of hallucinations or bias are small.
Sure, if this is the case, then it’s not clear to me if/where we disagree.
I’d agree that it’s definitely possible to use these systems in poor ways, with all the downsides that you describe and more.
My main point is that I think we can also point to some worlds and some solutions that are quite positive in these cases. I’d also expect that there’s a lot of good that could be done by improving the positive AI/governance workflows, even if there are also other bad AI/governance workflows in the world.
This is similar to how I think some technology is quite bad, but there’s also a lot of great technology—and often the solution to bad tech is good tech, not no tech.