Welcome to the forum!
I’ve done research in reinforcement learning, and I can say this sort of behavior is very common and expected. I once worked on a project where I misspecified the reward function, and the agent learned to kill itself rather than explore the environment, because dying immediately let it avoid the even larger negative reward it would rack up by sticking around. I didn’t consider this very notable, because once I thought about the reward function, it was obvious this would happen.
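To make the failure mode concrete, here’s a minimal back-of-the-envelope sketch (not my original project; all the numbers are made up for illustration) of how a per-step living penalty plus a modest death penalty can make immediate self-termination the reward-maximizing policy:

```python
# Hypothetical toy illustration: with a per-step "living" penalty and a
# one-off death penalty, immediate self-termination can maximize return
# under a misspecified reward.

STEP_PENALTY = -1.0    # assumed reward for every timestep spent alive
DEATH_PENALTY = -5.0   # assumed one-off reward for terminating early (e.g. stepping into lava)
GOAL_REWARD = +10.0    # assumed reward for reaching the goal
STEPS_TO_GOAL = 30     # assumed length of a successful rollout
P_FIND_GOAL = 0.1      # assumed chance a random-exploration rollout succeeds
EPISODE_LIMIT = 50     # assumed episode length when the goal isn't found

# Expected (undiscounted) return of exploring vs. dying immediately.
explore_return = (
    P_FIND_GOAL * (STEPS_TO_GOAL * STEP_PENALTY + GOAL_REWARD)
    + (1 - P_FIND_GOAL) * (EPISODE_LIMIT * STEP_PENALTY)
)
suicide_return = DEATH_PENALTY

print(f"expected return, explore: {explore_return:.1f}")   # -47.0
print(f"expected return, die now: {suicide_return:.1f}")   #  -5.0
# Under these numbers, the optimal policy is to terminate the episode immediately.
```

Any agent that gets decent at maximizing this reward will find the "suicide" strategy; the fix is to shape the reward so that staying alive and searching isn’t strictly worse than dying.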
Here is a spreadsheet with a long list of examples of this kind of specification gaming. I suspect your grant was rejected because, if the agent did behave as you expect, the result wouldn’t provide much original insight beyond what people have already found. That said, many of the existing examples are in “toy” environments, and it might be interesting to observe this kind of behavior in more complex environments.
It might still be useful for your own learning to implement this yourself!