Reading closer, I would separately note that there seems to be some semantic ambiguity in how you and others describe extreme optimizers.
I think that an agent that’s “intensely maximizing for a goal that can be put into numbers in order to show that it’s optimal” can still be incredibly humble and reserved.
Holden writes, “Can we avoid these pitfalls by ‘just maximizing correctly’?” and basically answers no, but his alternative proposal is to “apply a broad sense of pluralism and moderation to much of what they do”.
I think that, very arguably, Holden is basically saying, “The utility we’d get from executing the [pluralism and moderation] strategy is greater than the utility we’d get from executing the [naive narrow optimization] strategy, so we should pursue the former”. To me, this can easily be understood as a form of “utility optimization over utility optimization strategies.” So Holden’s resulting strategy can still be considered utility optimization, in my opinion.
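To make that meta-level move explicit (this is my formalization, not anything Holden writes), the choice among strategies can itself be framed as a maximization:

$$s^* = \arg\max_{s \,\in\, \{\text{pluralism and moderation},\ \text{naive narrow optimization},\ \dots\}} \mathbb{E}\left[\,U(\text{outcome}) \mid s\,\right]$$

If the pluralism-and-moderation strategy wins this comparison, then adopting it is still an act of expected-utility maximization, just one level up.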