Do you agree with Susan Wolf’s claim in Moral Saints that we ought to consider non-moral values in deciding what we do, and that those may be a valid reason not to give more to the worst off? I presume you’ve written about it before on your blog or in a paper.
A related question is: do you think the gap is greater between our actions and what we think we ought to do, or between what we think we ought to do and what we ought to do in some realist meta-normative sense? Is the bigger issue that we lack moral knowledge, or that we don’t live up to moral standards?
I’m open to the possibility that what’s all-things-considered best might take into account other kinds of values beyond traditionally welfarist ones (e.g. Nietzschean perfectionism). But standard sorts of agent-relative reasons like Wolf adverts to (reasons to want your life in particular to be more well-rounded) strike me as valid excuses rather than valid justifications. It isn’t really a better decision to do the more selfish thing, IMO.
Your second paragraph is hard to answer because different people have different moral beliefs, and (as I suggest in the OP) laxer moral beliefs often stem from motivated reasoning. So the two may be intertwined. But obviously my hope is that greater clarity of moral knowledge may help us to do more good even with limited moral motivation.