Two subcategories of idea 3 that I see, and my steelman of each:
3a. To maximize good, it’s a mistake to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or: most of the QALYs we can create come from things that are difficult to maximize quantitatively, like the ripple effects of others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern would.
3b. “Good” cannot be quantified even in theory, except in the nitpicky sense that, mathematically, an agent with coherent preferences acts as if it’s maximizing expected utility (sketched below). Such a utility function is meaningless as a measure of good. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only if you have a certain psychological framing. Even though none of this makes sense as quantified maximization, the decisions are still morally correct.
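(For reference, the “nitpicky sense” in 3b is the coherence-theorem / VNM-style representation result. A rough sketch, in my own notation rather than anything from the argument above: if an agent’s preferences $\succsim$ over gambles satisfy the usual axioms, then there exists some function $u$ such that

$$A \succsim B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)],$$

with $u$ unique only up to positive affine transformation ($u \mapsto au + b$, $a > 0$). Such a $u$ is read off from the agent’s choices rather than from any independent measure of good, which I take to be the sense in which 3b calls it meaningless.)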
I think something like 3a is right, especially given our cluelessness.