One of the authors (Peter Dayan) is my supervisor; let me know if you’d like me to ask him anything, as he does a lot of RL-style modelling :)

Great! It’s not super important, but I’d be curious to know his own thoughts on the question of why pleasure and pain feel different and aren’t just a single dimension of motivation, given that you can shift all rewards up or down uniformly while keeping behavior unchanged. Here is one possible explanation, which mentions Daw et al. (2002).
I’d also be curious to know at what level of complexity / ability of artificial RL systems he would start to grant them ethical consideration.
I’ve had a look into Dayan’s suggested papers—they imply an interesting theory. I’ll put my thoughts here so the discussion can be public. The theory contradicts the one you link above, in which the separation between pain and pleasure is a contingency of how our brain works.
You’ve written about another (very intuitive) theory, where the zero-point is where you’d be indifferent between prolonging and ending your life:
“This explanation may sound plausible due to its analogy to familiar concepts, but it seems to place undue weight on whether an agent’s lifetime is fixed or variable. Yet I would still feel pain and pleasure as being distinct even if I knew exactly when I would die, and a simple RL agent has no concept of death to begin with.”
Dayan’s research suggests that the zero-point will also come up in many circumstances relating to opportunity costs, which would deal with that objection. To simplify, let’s say the agent expects a fixed average rate of return rho for the foreseeable future. It is faced with a problem where it can either act fast (high energy expenditure) or act slowly (high opportunity costs, since it won’t get the average return for a while). If rho is negative or zero, there is no need to act quickly at all, because there are no opportunity costs. But the higher the opportunity costs get, the faster the agent will want to get back to earning its average reward, so it will act quickly despite the immediate energy cost.
The speed with which the agent acts is called vigour in Dayan’s research. The agent’s vigour mathematically implies an average rate of return if the agent is rational. There can be other reasons for low vigour, such as a task that requires patience—they have some experiments on this in figure 1. In their experiment, the optimal vigour (one over tau*, the optimal latency) is proportional to the square root of the average rate of return. A recent paper has confirmed the predictions of this model in humans.
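To make that relationship concrete, here is a minimal numerical sketch (my own illustration, not code from the papers; the cost terms are assumptions): suppose acting with latency tau costs C/tau in energy (as in the lever example below, with C = 1) and forgoes rho per unit of time spent acting. Minimizing the total cost gives tau* = sqrt(C/rho), so the optimal vigour 1/tau* grows like the square root of the average rate of return.

```python
import numpy as np

# Illustration only: cost of acting with latency tau, assuming an energy cost
# of C/tau and a foregone average reward of rho per unit time spent acting.
def total_cost(tau, rho, C=1.0):
    return C / tau + rho * tau

for rho in [0.5, 1.0, 2.0, 4.0]:
    taus = np.linspace(0.01, 10.0, 100_000)
    tau_star = taus[np.argmin(total_cost(taus, rho))]
    # Analytic optimum: d/dtau (C/tau + rho*tau) = 0  =>  tau* = sqrt(C/rho),
    # so vigour 1/tau* is proportional to sqrt(rho).
    print(rho, round(tau_star, 3), round(np.sqrt(1.0 / rho), 3))
```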
So when is an agent happy according to this model?
The model would imply that the agent has positive welfare from an activity when it treats that activity as creating positive opportunity costs while it’s doing other things (and vice versa for negative welfare). This would also apply to your example where the agent expends resources to increase or decrease its life-time.
What I like about this is that the welfare depends on the agent’s behaviour and not on the way the rewards are internally processed and represented as numbers, which is arbitrary.
I’m still not sure how you would go about calculating the welfare of an agent if you don’t have a nice experimental setup like Dayan’s. That might be amenable to more thinking. Moreover, all welfare is still relative, and the model doesn’t allow comparisons between agents.
Edit: I’m not sure, though, whether there’s now a problem, because we have to assume that the ‘inactive’ time where the agent doesn’t get its average reward is the zero baseline, which is also arbitrary.
Thanks!! Interesting. I haven’t read the linked papers, so let me know if I don’t understand properly (as I probably don’t).
I’ve always thought of simple RL agents as getting a reward at fixed time intervals no matter what they do, in which case they can’t act faster or slower. For example, if they skip pressing a lever, they just get a reward of 0 for that time step. Likewise, in an actual animal, the animal’s reward neurons don’t fire during the time when the lever isn’t being pressed, which is equivalent to a reward of 0.
Of course, animals would prefer to press the lever more often to get a positive reward rather than a reward of 0, but this would be true whether the lever gave positive reward or merely relief from punishment. For example, maybe the time between lever presses is painful, and the pressed lever is merely less painful. This could be the experience of, e.g., a person after a breakup consuming ice cream scoops at a higher rate than normal to escape her pain: even with the increased rate of ice cream intake, she may still have negative welfare, just less negative. It seems like vigor just says that what you’re doing is better than not doing it?
For really simple RL agents like those living in Grid World, there is no external clock. Time is sort of defined by when the agent takes its next step. So it’s again not clear if a “rate of actions” explanation can help here (but if it helps for more realistic RL agents, that’s cool!).
This answer says that for a Markov Decision Process, “each action taken is done in a time step.” So it seems like a time step is defined as the interval between one action and the next?
Thanks for the reply. I think I can clarify the issue about discrete time intervals. I’d be curious about your thoughts on the last sentence of my comment above, if you have any.
Discrete time
So it seems like a time step is defined as the interval between one action and the next?
Yes. But in a semi-Markov Decision Process (SMDP) or a continuous-time MDP (https://en.wikipedia.org/wiki/Markov_decision_process#Continuous-time_Markov_Decision_Process) this is not the case. SMDPs allow temporally extended actions and are commonly used in RL research. Dayan’s papers use a continuous-time SMDP. You can still have RL agents in this formalism, and it tracks our situation more closely. But I don’t think the formalism matters for our discussion, because you can approximate any of these formalisms arbitrarily well with a standard MDP—I’ll explain below.
The continuous-time experiment looks roughly like this: imagine you’re in a room and you have to press a lever to get out—and get back to what you would normally be doing, earning an average reward rho per second. However, the lever is hard to press. You can press it hard and fast or lightly and slowly, taking a total time T to complete the press. The total energy cost of pressing is 1/T, so ideally you’d press very slowly, but that would mean you couldn’t be outside the room during that time (opportunity costs).
In this setting, the ‘action’ is just the time T that you take to press the lever. We can easily approximate this with a standard MDP. E.g. you could take action 1, which completely presses the lever in one time step, costing you 1/1 = 1 reward in energy. Or you could take action 2, which you would have to take twice to complete the press, costing you only 1/2 reward in total (so 1/4 each time you take action 2). And so forth. Does that make sense?
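As a sanity check on that discretization (my own sketch, with made-up numbers): if completing the press over k steps costs 1/k in total energy, then each of those steps costs 1/k^2, and a slower press trades a smaller energy cost against more time steps spent away from the outside reward rate rho.

```python
# Hypothetical discretized lever task (illustration only, not from the papers).
# Pressing over k steps costs 1/k in total energy (1/k**2 per step), and every
# step spent pressing forgoes the outside reward rate rho.
def return_for_press_duration(k, rho, horizon=100):
    energy_cost = sum(1.0 / k**2 for _ in range(k))   # adds up to 1/k
    outside_reward = rho * (horizon - k)              # remaining steps earn rho each
    return outside_reward - energy_cost

rho = 0.25
best_k = max(range(1, 50), key=lambda k: return_for_press_duration(k, rho))
print(best_k)  # 2, i.e. roughly 1/sqrt(rho), matching the continuous-time analysis
```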
Zero point
Of course, if you don’t like it outside the room at all, you’ll never press the lever—so there is a ‘zero point’ in terms of how much you like it outside. Below that point you’ll never press the lever.
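A tiny continuation of that sketch (again my own toy numbers, with staying in the room assumed to be worth exactly 0 per step): whether the agent ever presses flips exactly where the outside reward rate crosses that assumed baseline.

```python
# Assumes the in-room baseline is 0 reward per step (this is exactly the
# assumption questioned later in the thread).
def press_return(k, rho, horizon=100):
    return rho * (horizon - k) - 1.0 / k   # outside reward minus total energy cost 1/k

def presses_at_all(rho, horizon=100):
    best_press = max(press_return(k, rho, horizon) for k in range(1, horizon))
    return best_press > 0.0                # 0.0 = value of never pressing

print(presses_at_all(0.1), presses_at_all(0.0), presses_at_all(-0.1))
# True False False: the sign of rho relative to the in-room baseline decides it.
```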
It seems like vigor just says that what you’re doing is better than not doing it?
I’m not entirely sure what you mean, but I’ll clarify that acting vigorously doesn’t say anything about whether the agent is currently happy. It may well act vigorously just to escape punishment. Similarly, an agent that currently works to increase its life-time doesn’t necessarily feel good, but its work still implies that it thinks the additional life-time it gets will be good.
But I think your criticism may be the same as what I said in the edit above—that there is an unwarranted assumption that the agent is at the zero-point before it presses the lever. In the experiments this is assumed because there are no food rewards or shocks during that time. But you could still imagine that a depressed rat would feel bad anyway.
The theory that takes nonexistence as the zero-point kind of does the same thing, though. Although nonexistence is arguably a definite zero-point, the agent’s utility function might still extend beyond its life-time... Does this clarify the case?
acting vigorously doesn’t say anything about whether the agent is currently happy
Your explanation was clear. :) Yeah, I guess I meant the trivial observation that you act vigorously if you judge that doing so has higher expected total discounted reward than not doing so. But this doesn’t speak to whether, after making that vigorous effort, your experiences will be net positive; they might just be less negative.
Of course, if you don’t like it outside the room at all, you’ll never press the lever—so there is a ‘zero point’ in terms of how much you like it outside.
...assuming that sticking around inside the room is neutral. This gets back to the “unwarranted assumption that the agent is at the zero-point before it presses the lever.”
The theory that assumes nonexistence is the zero-point kind of does the same thing though.
Hm. :) I feel like there’s a difference between (a) an agent inside the room who hasn’t yet pressed the lever to get out and (b) the agent not existing at all. For (a), it seems we ought to be able to give a (qualia and morally nonrealist) answer about whether its experiences are positive or negative or neutral, while for (b), such a question seems misplaced.
If it were a human in the room, we could ask that person whether her experiences before lever pressing were net positive or negative. I guess such answers could vary a lot between people based on various cultural, psychological, etc. factors unrelated to the activity level of reward networks. If so, perhaps one position could be that the distinction between positive vs. negative welfare is a pretty anthropomorphic concept that doesn’t travel well outside of a cognitive system capable of making these kinds of judgments. Intuitively, I feel like there is more to the sign of one’s welfare than these high-level, potentially idiosyncratic evaluations, but it’s hard to say what.
I suppose another approach could be to say that the person in the room definitely is at welfare 0 (by fiat) based on lack of reward or punishment signals, regardless of how the person evaluates her welfare verbally.
I feel like there’s a difference between (a) an agent inside the room who hasn’t yet pressed the lever to get out and (b) the agent not existing at all.
Yes, that’s probably the right way to think about it. I’m also considering an alternative, though: since we’re describing the situation with a simple computational model, we shouldn’t assume that there’s anything going on that isn’t captured by the model. E.g. if the agent in the room is depressed, it will be performing ‘mental actions’ (imagining depressing scenarios, etc.). But we may have to assume that away, similar to how high-school physics assumes away friction.
So we’re left with an agent that decides initially that it won’t do anything at all (not even updating its beliefs), because it doesn’t want to be outside of the room, and then remains inactive. The question arises whether that’s an agent at all, and whether it’s meaningfully different from unconsciousness.
So we’re left with an agent that decides initially that it won’t do anything at all (not even updating its beliefs), because it doesn’t want to be outside of the room, and then remains inactive. The question arises whether that’s an agent at all, and whether it’s meaningfully different from unconsciousness.
Hm. :) Well, what if the agent did do stuff inside the room but still decided not to go out? We still wouldn’t be able to tell if it was experiencing net positive, negative, or neutral welfare. Examples:
It’s winter. The agent is cold indoors and is trying to move to the warm parts of the room. We assume its welfare is net negative. But it doesn’t go outside because it’s even colder outside.
The agent is indoors having a party. We assume it’s experiencing net positive welfare. It doesn’t want to go outside because the party is inside.
We can reproduce the behavior of these agents with reward/punishment values that are all positive numbers, all negative numbers, or a combination of the two. So if we omit the higher-level thoughts of the agents and just focus on the reward numbers at an abstract level, it doesn’t seem like we can meaningfully distinguish positive or negative welfare. Hence, the sign of welfare must come from the richer context that our human-centered knowledge and evaluations bring?
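To make that concrete, here is a minimal toy sketch (my own, with hypothetical numbers): an agent repeatedly choosing between staying inside and going outside behaves identically whether its rewards are all positive, all negative, or mixed, because a uniform shift moves every discounted value by the same amount.

```python
# Toy illustration (hypothetical numbers): a uniform shift of all rewards
# leaves the agent's choice unchanged, so behaviour alone can't fix the sign.
GAMMA = 0.9

def discounted_value(reward_per_step, steps=1000):
    return sum(reward_per_step * GAMMA**t for t in range(steps))

for shift in [0.0, -5.0, -1.5]:   # both rewards positive, both negative, mixed
    inside, outside = 2.0 + shift, 1.0 + shift
    choice = "stay inside" if discounted_value(inside) > discounted_value(outside) else "go outside"
    print(shift, choice)          # always "stay inside"
```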
Of course, qualia nonrealists already knew that the sign and magnitude of an organism’s welfare are things we make up. But most people can agree upon, e.g., the sign of the welfare of the person at the party. In contrast, there doesn’t seem to be a principled way that most people would agree upon for us to attribute a sign of welfare to a simple RL agent that reproduces the high-level behavior of the person at the party.
After some clarification, Dayan thinks that vigour is not the thing I was looking for.
We discussed this a bit further and he suggested that the temporal difference error does track pretty closely what we mean by happiness/suffering, at least as far as the zero point is concerned. Here’s a paper making the case (but it has limited scope IMO).
If that’s true, we wouldn’t need, e.g., the theory that the zero point exists in order to keep firing rates close to zero.
The only problem with TD errors seems to be that they don’t account for the difference between wanting and liking. But it’s currently just unresolved what the function of liking is. So I came away with the impression that liking vs. wanting, and not the zero point, is the central question.
I’ve seen one paper suggesting that liking is basically the consumption of rewards, which would bring us back to the question of the zero point though. But we didn’t find that theory satisfying. E.g. food is just a proxy for survival. And as the paper I linked shows, happiness can follow TD errors even when no rewards are consumed.
Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc., similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon. I don’t know if that would mean that liking has no function. Any thoughts?
Interesting. :) Daswani and Leike (2015) also define (p. 4) happiness as the temporal-difference error (in an MDP), and for model-based agents, the definition is, in my interpretation, basically the common Internet slogan that “happiness = reality minus expectations”. However, the authors point out (p. 2) that pleasure = reward != happiness. This still leaves open the issue of what pleasure is.
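For reference, the TD error in question is delta = r + gamma * V(next state) - V(current state). Here is a minimal sketch (my own toy values) of the distinction being drawn: once the cue-to-cookie contingency has been learned, the unexpected cue produces a positive delta, while actually consuming the fully predicted cookie produces a delta of about zero even though that is when the reward r arrives.

```python
# Toy TD-error illustration (hypothetical values): delta = r + gamma*V(s') - V(s).
GAMMA = 1.0
# After learning, seeing the cue already predicts the cookie's worth of 1.0.
V = {"no_cue": 0.0, "cue_seen": 1.0, "after_cookie": 0.0}

def td_error(reward, state, next_state):
    return reward + GAMMA * V[next_state] - V[state]

print(td_error(0.0, "no_cue", "cue_seen"))        # 1.0: "happiness" on the TD definition, no reward consumed
print(td_error(1.0, "cue_seen", "after_cookie"))  # 0.0: reward (pleasure) arrives, but no TD surprise
```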
Personally I think pleasure is more morally relevant. In Tomasik (2014), I wrote (p. 11):
After training, dopamine spikes when a cue appears signaling that a reward will arrive, not when the reward itself is consumed [Schultz et al., 1997], but we know subjectively that the main pleasure of a reward comes from consuming it, not predicting it. In other words, in equation (1), the pleasure comes from the actual reward r, not from the amount of dopamine δ.
In this post commenting on Daswani and Leike (2015), I said:
I personally don’t think the definition of “happiness” that Daswani and Leike advance is the most morally relevant one, but the authors make an interesting case for their definition. I think their definition corresponds most closely with “being pleased of one’s current state in a high-level sense”. In contrast, I think raw pleasure/pain is most morally significant. As a simple test, ask whether you’d rather be in a state where you’ve been unexpectedly notified that you’ll get a cookie in a few minutes or whether you’d rather be in the state where you actually eat the cookie after having been notified a few minutes earlier. Daswani and Leike’s definition considers being notified about the cookie to be happiness, while I think eating the cookie has more moral relevance.
Dayan mentioned that liking may even be an epiphenomenon of some things that are going on in the brain when we eat food/have sex etc, similar to how the specific flavour of pleasure we get from listening to music is such an epiphenomenon.
I’m not sure I understand, but I wrote a quick thing here inspired by this comment. Do you think that’s what he meant? If so, may I attribute the idea to him/you? It seems fairly plausible. :) Studying what separates red from blue might help shed light on this topic.