Hey Vasco, in a constructive spirit, let me explain how I believe I can be a utilitarian, perhaps hedonistic to some degree, value animals highly, and still not justify letting innocent children die, which I take as a sign of the limitations of consequentialism. Basically, you can stop consequence flows (or discount them very significantly) whenever they go through other people’s choices. People are free to make their own decisions. I am not sure if there is a name for this moral theory, but it would be roughly what I subscribe to.
I do not think this is an ideal solution to the moral problem, but I think it is much better than advocating letting innocent children die because of what they may end up doing.
Thanks, Pablo. I appreciate the effort to be constructive. However, I have a hard time parsing what you are suggesting. All moral actions depend on humans’ decisions to some extent, so it looks like everything would be on an equal footing. One could argue we should discount more heavily the consequences which depend more on humans’ decisions, but I do not understand what this means. In my mind, one should simply weigh consequences according to their probabilities.