saving money while searching for the maximum seems bad
In the sense of “maximizing” you’re using here, I agree entirely with this post. Aiming for the very best option according to a particular model and pushing solely on that as hard as you can will expose you to Goodhart problems, diminishing returns, model violations, etc.
However, I think the sense of “maximizing” used in the post you’re responding to, and more broadly in EA when people talk about “maximizing ethics”, is quite different. I understand it to mean something more like “doing the most good possible”—not aiming to clear a certain threshold, or trading off with other ethical or non-ethical priorities. It’s a philosophical commitment that says “even if you’re already saved a hundred lives, it’s just as ethically important to save one more. You’re not done.”
It’s possible that a commitment to a maximizing philosophy can lead people to adopt a mindset like the one you describe in this post—to the extent that’s true, I don’t disagree at all that they’re making a mistake. But I think there may be a terminological mismatch here that will lead to illusory disagreements.
I think you’re right that there are two meanings, and I’m primarily pointing to the failures on the more obviously bad level. But your view—that no given level is good enough, and we need to do marginally more—is still not equivalent to the maximizing view that I see and worry about. The view I’m talking about is an imperative to only do the best thing, not to do lesser good things.
And I think that the conception of binary effectiveness usually leads to the failure modes I pointed out. Unless and until the first half of Will’s Effective Altruism is complete—an impossible goal, in my view—we need to ensure that we’re doing more good at each step, not trying to ensure we do the most good, and nothing less.