Hm… I appreciate what you may be getting at, but I don’t think that post actually says maximizing is bad; rather, it says that the specific thing one chooses to maximize probably isn’t exactly the best possible thing (though it could still be the best possible guess).
In many areas, maximizing as a general heuristic works pretty well. I wouldn’t mind maximizing income and happiness within reasonable limits. But maximization can of course be dangerous, as is true of many decision procedures.
To say maximizing is usually a bad idea would require assuming some reference class of possible things to maximize, and I find that hard to visualize.