Neat stuff here—thank you for the thoughtful comment!
I agree that few people believe their choice of intervention is actually the most useful, and that we often lavish praise on people who simply do a lot of good. For example, many consider figures like Warren Buffett and Bill Gates very praiseworthy because, even though they have private jets, they still do a lot of good.
I also agree that maximization ought not to be revered as an imperative. An imperative to maximize, like thick consequentialism as a moral theory generally, is too demanding. Given that, I struggle to see why we need it at all. Folks who truly want to do a lot of good will still perform optimization calculations even if they aren’t explicitly trying to maximize. That makes maximization neither a normative nor a descriptive part of anything we do.
In your example about “the same money could carry you much further toward your goal if you did X”, there is no maximization rhetoric present. If you were using maximization as a “wrong but useful” model, you would more likely say something like, “I deduced that the same money would carry you farthest if you did X, so don’t give to community theater, and don’t do Y or Z either unless you can show me why they’re more effective than X.”
As an analogy, you don’t have to try to be the best philosopher of all time in order to produce great thinking.