Yes, but if at some point you find out, for example, that your model of morality leads to the conclusion that one should kill all humans, you'd probably conclude that your model is wrong rather than actually go through with it.
It's an extreme example, but at bottom every model of morality is an approximation derived from our internal moral intuitions: that life is better than death, happiness better than pain, satisfying desires better than frustrating them, following god's commands better than ignoring them, and so on.