If you want literal interpretations, specificity, and explicitness, I think you’re in for a bad time:
“Any particular person dies at least x earlier with probability > p than they would have by inaction”
How do you intend to define “person” in terms of the inputs to an AI system (let’s assume a camera image)? How do you compute the “probability” of an event? What is “inaction”?
(There’s also the problem that all actions probably change who does and doesn’t exist, so this law would require the AI system to always take inaction, making it useless.)
How do you intend to define “person” in terms of the inputs to an AI system (let’s assume a camera image)?
Can we just define them as we normally do, e.g. biologically with a functioning brain? Is the concern that AIs won’t be able to tell which inputs represent real things from those that don’t? Or that they just won’t be able to apply the definitions correctly generally enough?
How do you compute the “probability” of an event?
The AI would do this. Are AIs that aren’t good at estimating probabilities of events smart enough to worry about? I suppose they could be good at estimating probabilities in specific domains but not generally, or have some very specific failure cases that could be catastrophic.
What is “inaction”?
The AI waits for the next request, turns off, or takes some other inconsequential default action.
(There’s also the problem that all actions probably change who does and doesn’t exist, so this law would require the AI system to always take inaction, making it useless.)
Maybe my wording didn’t capture this well, but my intention was a presentist/necessitarian person-affecting approach (not that I agree with the ethical position). I’ll try again:
“A particular person would have been born both with action A and with inaction, and will die at least x earlier with probability > p with A than they would have with inaction.”
Can we just define them as we normally do, e.g. biologically with a functioning brain?
How do you define “biological” and “brain”? Again, your input is a camera image, so you have to build this up starting from sentences of the form “the pixel in the top left corner is this shade of grey”.
(Or you can choose some other input, as long as we actually have existing technology that can create that input.)
The AI would do this. Are AIs that aren’t good at estimating probabilities of events smart enough to worry about?
Powerful AIs will certainly behave in ways that make it look like they are estimating probabilities.
Let’s take AIs trained by deep reinforcement learning as an example. If you want to encode something like “Any particular person dies at least x earlier with probability > p than they would have by inaction” explicitly and literally in code, you will need functions like getAllPeople() and getProbability(event). AIs do not usually come equipped with such functions, so you either have to say how to use the AI system to implement those functions, or you have to implement them yourself. I am claiming that the second option is hard, and any solution you have for the first option will probably also work for something like telling the AI system to “do what the user wants”.
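To make that concrete, here is a minimal sketch of the shape such code would have to take (everything besides getAllPeople and getProbability, including the Event class and the threshold values, is a placeholder I’m inventing purely for illustration; the point is that the two stubbed functions carry all the difficulty):

```python
from dataclasses import dataclass
from typing import Iterable

X_YEARS = 1.0   # "dies at least x earlier" (placeholder value)
P_MAX = 1e-6    # "with probability > p" (placeholder value)

@dataclass
class Event:
    description: str  # e.g. "this person dies >= 1 year earlier under the action than under inaction"

def getAllPeople() -> Iterable[str]:
    # The hard part: enumerate every particular person, defined in terms of the
    # AI system's raw inputs (e.g. camera images).
    raise NotImplementedError

def getProbability(event: Event) -> float:
    # The other hard part: a calibrated probability estimate for an arbitrary event.
    raise NotImplementedError

def action_is_forbidden(action, inaction) -> bool:
    """Literal reading of the law: forbidden if, for any particular person, the probability
    that they die at least X_YEARS earlier under `action` than under `inaction` exceeds P_MAX."""
    for person in getAllPeople():
        event = Event(f"{person} dies at least {X_YEARS} years earlier under the action than under inaction")
        if getProbability(event) > P_MAX:
            return True
    return False
```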
The AI waits for the next request, turns off, or takes some other inconsequential default action.
If you’re a self-driving car, it’s very unclear what an inconsequential default action is. (Though I agree in general there’s often some default action that is fine.)
Maybe my wording didn’t capture this well, but my intention was a presentist/necessitarian person-affecting approach (not that I agree with the ethical position).
I mean, the existence part was not the main point—my point was that if butterfly effects are real, then the AI system must always do nothing (even if it can’t predict what the butterfly effects would be). If you want to avoid debates about population ethics, you could imagine butterfly effects that affect current people: e.g. you slightly change who talks to whom, which changes whether a person gets hit by a car later in the day or not.
I’m not arguing that these sorts of butterfly effects are real—I’m not sure—but it seems bad for the behavior of our AI system to depend so strongly on whether butterfly effects are real.
Maybe this cuts to the chase: should we expect AIs to be able to know or do anything in particular well “enough”? I.e., is there one thing in particular we can say AIs will be good at and only get wrong extremely rarely? Is solving this as hard as technical AI alignment in general?
How do you define “biological” and “brain”? Again, your input is a camera image, so you have to build this up starting from sentences of the form “the pixel in the top left corner is this shade of grey”.
These are things it would be trained to learn. It would learn to read, and it could read biology textbooks, papers, or things online; it would also see pictures of people, brains, etc.
AIs do not usually come equipped with such functions, so you either have to say how to use the AI system to implement those functions, or you have to implement them yourself.
This could be an explicit output we train the AI to predict (possibly as part of its responses in language).
I mean, the existence part was not the main point—my point was that if butterfly effects are real, then the AI system must always do nothing (even if it can’t predict what the butterfly effects would be). If you want to avoid debates about population ethics, you could imagine butterfly effects that affect current people: e.g. you slightly change who talks to whom, which changes whether a person gets hit by a car later in the day or not.
I “named” a particular person in that sentence. The probability that what I do leads to an earlier death for John Doe is extremely small, and that’s the probability that I’m constraining, for each person separately. In practice this will also prevent the AI from conducting murder lotteries up to a certain probability of being killed, but that probability might be too high, so you could add separate constraints, e.g. on causing an earlier death for a random person, or on the change in average life expectancy in the world, to prevent this.
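To sketch what I mean in the same style as the code above (reusing the assumed getAllPeople/getProbability stubs; dies_earlier, randomly_chosen_person, average_life_expectancy_change, and all threshold values are hypothetical placeholders, not proposed numbers):

```python
P_PERSON = 1e-9    # bound on each particular person's probability of an earlier death
P_RANDOM = 1e-7    # separate bound for "a random person dies earlier"
MAX_LE_DROP = 0.0  # allowed drop in average world life expectancy, in years

def dies_earlier(person, action, inaction):
    # Hypothetical: the event that `person` dies at least x years earlier under `action`.
    raise NotImplementedError

def randomly_chosen_person():
    # Hypothetical: sample a person uniformly from getAllPeople().
    raise NotImplementedError

def average_life_expectancy_change(action, inaction) -> float:
    # Hypothetical: expected change in average world life expectancy under `action` vs. inaction.
    raise NotImplementedError

def action_is_permitted(action, inaction) -> bool:
    # Constraint 1: for each particular person separately, the probability of causing
    # them an earlier death stays below P_PERSON.
    for person in getAllPeople():
        if getProbability(dies_earlier(person, action, inaction)) > P_PERSON:
            return False
    # Constraint 2 (separate): bound the probability of causing an earlier death for a
    # randomly chosen person, since the per-person bound alone may be too loose to rule
    # out murder lotteries.
    if getProbability(dies_earlier(randomly_chosen_person(), action, inaction)) > P_RANDOM:
        return False
    # Constraint 3 (separate): bound the change in average life expectancy in the world.
    if average_life_expectancy_change(action, inaction) < -MAX_LE_DROP:
        return False
    return True
```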
These are things it would be trained to learn. It would learn to read, and it could read biology textbooks, papers, or things online; it would also see pictures of people, brains, etc.
It really sounds like this sort of training is going to require it to be able to interpret English the way we interpret English (e.g. to read biology textbooks); if you’re going to rely on that, I don’t see why you don’t want to rely on that ability when we are giving it instructions.
This could be an explicit output we train the AI to predict (possibly as part of its responses in language).
That… is ambitious, if you want to do this for every term that exists in laws. But I agree that if you did this, you could try to “translate” laws into code in a literal fashion. I’m fairly confident that this would still be pretty far from what you wanted, because laws aren’t meant to be literal, but I’m not going to try to argue that here.
(Also, it probably wouldn’t be computationally efficient—that “don’t kill a person” law, to be implemented literally in code, would require you to loop over all people, and make a prediction for each one: extremely expensive.)
I “named” a particular person in that sentence.
Ah, I see. In that case I take back my objection about butterfly effects.