EA is about maximization, and maximization is perilous
This is not a contest submission; I don’t think it’d be appropriate for me to enter this contest given my position as a CEA funder. This also wasn’t really inspired by the contest—I’ve been thinking about writing something like this for a little while—but I thought the contest provided a nice time to put it out.
This piece generally errs on the concise side, gesturing at intuitions rather than trying to nail down my case thoroughly. As a result, there’s probably some nuance I’m failing to capture, and hence more ways than usual in which I would revise my statements upon further discussion.
For most of the past few years, I’ve had the following view on EA criticism:
Most EA criticism is—and should be—about the community as it exists today, rather than about the “core ideas.”
The core ideas are just solid. Do the most good possible—should we really be arguing about that?
Recently, though, I’ve been thinking more about this and realized I’ve changed my mind. I think “do the most good possible” is an intriguing idea, a powerful idea, and an important idea—but it’s also a perilous idea if taken too far. My basic case for this is that:
If you’re maximizing X, you’re asking for trouble by default. You risk breaking/downplaying/shortchanging lots of things that aren’t X, which may be important in ways you’re not seeing. Maximizing X conceptually means putting everything else aside for X—a terrible idea unless you’re really sure you have the right X. (This idea vaguely echoes some concerns about AI alignment, e.g., powerfully maximizing not-exactly-the-right-thing is something of a worst-case event.)
EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA. By default, that seems like a recipe for trouble.
The upshot is that I think the core ideas of EA present constant temptations to create problems. Fortunately, I think EA mostly resists these temptations—but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls. As EA grows, this could be a fragile situation.
I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have—a constant current that we need to swim against.
How things would go if we were maximally “hard-core”
The general conceptual points behind my critique—“maximization is perilous unless you’re sure you have the right maximand” and “EA is centrally about maximizing something that we can’t define or measure and have massive disagreements about”—are hopefully reasonably clear and sufficiently explained above.
To make this more concrete, I’ll list just some examples of things I think would be major problems if being “EA” meant embracing the core ideas of EA without limits or reservations.
We’d have a bitterly divided community, with clusters having diametrically opposed goals.
For example:
Many EAs think that “do the most good possible” ends up roughly meaning “Focus on the implications of your actions for the long-run future.”
Within this set, some EAs essentially endorse: “The more persons there are in the long-run future, the better it is” while others endorse something close to the opposite: “The more persons there are in the long-run future, the worse it is.”1
In practice, it seems that people in these two camps try hard to find common ground and cooperate. But it’s easy to envision a version of EA so splintered by this sort of debate that learning someone is an EA most often tells you that they are dedicated to maximizing something other than what you’re dedicated to maximizing, and that you should take a fundamentally adversarial and low-trust stance toward each other.
We’d have a community full of low-integrity people, and “bad people” as most people define it.
A lot of EAs’ best guess at the right maximand is along the lines of utilitarianism.
Does utilitarianism recommend that we communicate honestly, even when this would make our arguments less persuasive and cause fewer people to take action based on them? Or does utilitarianism recommend that we “say whatever it takes” to e.g. get people to donate to the charities we estimate to be best?
Does utilitarianism recommend that we stick to promises we made? Or does utilitarianism recommend that we go ahead and break them when this would free us up to pursue our current best-guess actions?
It seems that the answers to these questions are, at best, unclear, and different people have very different takes on them. In general, it seems to be extremely uncertain and debatable what utilitarianism says about a given decision, especially from a longtermist point of view.
Even if, say, 80% of utilitarian EAs thought that utilitarianism supported honesty and integrity, while 20% thought it did not, I think the result would be a noticeably and unusually low-integrity community, full of deception and “bad actors” by the standards of its reference class. I also think that high-integrity behavior works better in a setting where it’s common; 20% of EAs behaving in noticeably low-integrity ways might change the calculus for the other 80%, making things worse still.
My view is that—for the most part—people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it. (One hypothesis I’ve heard is that people who care a lot about holding themselves to a high ethical standard, and achieving consistency between their views, statements and actions, are drawn to both utilitarianism and high-integrity behavior for this reason.)
We’d probably have other issues that should just generally give us pause.
I’d expect someone who is determined to derive all their actions from utilitarianism—or from any other explicit maximization of some best-guess maximand—to be at high risk of things like being a bad friend (e.g., refusing to do inconvenient or difficult things when a friend is in need), bad partner (same), narrow thinker (not taking an interest in topics that don’t have clear relevance to the maximand), etc. This is because I doubt there is a maximand and calculation method available that reliably replicates all of the many heuristics people use to be “virtuous” on a variety of dimensions.
Can we avoid these pitfalls by “just maximizing correctly?”
You could argue that for nearly any maximand, it’s a good idea to be the sort of person other people trust and like; to keep lots of life options open; and generally to avoid the sorts of behaviors I worry about above, unless you’re quite confident that you have a calculation pointing that way.
You could make a case for this either from an instrumental point of view (“Doing these things will ultimately make you better at maximizing your maximand”) or using various “nonstandard decision theories” that some EAs are fond of (including myself to some degree).
But I doubt you can make a case that’s robustly compelling and is widely agreed upon, enough to prevent the dynamics I worry about above. Especially if you accept my claim that a significant minority of people behaving badly can be extremely bad. To the extent that the EA community is avoiding these pitfalls, I don’t think this is enough to explain it.
(I do in fact think—though not with overwhelming confidence—that the “pitfalls” I describe would be bad for most plausible EA maximands. How can I simultaneously think this, while also fearing that non-tempered acceptance of EA would result in these “pitfalls?” I address this in a brief appendix.)
Avoiding the pitfalls
I think the EA community does mostly avoid these pitfalls—not in the sense that the dynamics I worry about are absent, but in the sense that they don’t seem more common among EAs than in other analogous communities.
I think a major reason for this is simply that most EAs are reasonable, non-fanatical human beings, with a broad and mixed set of values like other human beings, who apply a broad sense of pluralism and moderation to much of what they do.
My sense is that many EAs’ writings and statements are much more one-dimensional and “maximizy” than their actions. Most EAs seem to take action by following a formula something like: “Take a job at an organization with an unusually high-impact mission, which it pursues using broadly accepted-by-society means even if its goals are unusual; donate an unusual amount to carefully chosen charities; maybe have some other relatively benign EA-connected habits like veganism or reducetarianism; otherwise, behave largely as I would if I weren’t an EA.”
I’m glad things are this way, and with things as they stand, I am happy to identify as an EA. But I don’t want to lose sight of the fact that EA likely works best with a strong dose of moderation. The core ideas on their own seem perilous, and that’s an ongoing challenge.
And I’m nervous about what I perceive as dynamics in some circles where people seem to “show off” how little moderation they accept—how self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals. I think this dynamic is positive at times and fine in moderation, but I do think it risks spiraling into a problem.
Brief appendix: spreading the goal of maximizing X can be bad for the goal of maximizing X
There’s a potentially confusing interplay of arguments here. To some degree, I’m calling certain potential dynamics “pitfalls” because I think they would (in fact) be bad for most plausible EA maximands. You might think something like: “Either these dynamics would be bad for the right maximand, in which case you can’t complain that a maximizing mindset is the problem (since proper maximizing would avoid the pitfalls) … or they wouldn’t be bad for the right maximand, and maybe that means they’re just good.” I have a couple of responses to this:
First, I think the “pitfalls” above are just broadly bad and should give us pause. The fact that a low-trust, bitterly divided EA community would probably be less effective is part of why I think it would be a bad thing, but only part of it. I think honesty is good partly because it seems usually instrumentally valuable, but I also think it’s just good, and would have some trouble being totally comfortable with any anti-honesty conclusion even if the reasoning seemed good.
Second, I think you can simultaneously believe “X would be bad for the maximand we care about” and “Broadly promoting and accepting a goal of maximizing that maximand would cause X.” EA isn’t just a theoretical principle, it’s a set of ideas and messages that are intended to be broadly spread and shared. It’s not contradictory to believe that spreading a goal could be bad for the goal, and it seems like a live risk here.
Notes
1. I’m thinking of classical utilitarianism for the former, suffering-focused ethics for the latter. Some additional assumptions are needed to reach the positions I list. In particular, some assumption (which I find very plausible) along the lines of: “If we condition on the long-run future having lots of persons in it, most of those lives are probably at least reasonably good [the persons would prefer to exist vs. not exist], but there’s a significant risk that at least some are very bad.”
I strongly agree with some parts of this post, in particular:
I think integrity is extremely important, and I like that this post reinforces that.
I think it’s a great point that EA seems like it could be very bitterly divided indeed, and appreciating that we haven’t as well as thinking about why (despite our various different beliefs) seems like a great exercise. It does seem like we should try to maintain those features.
On the other hand, I disagree with some of it—and thought I’d push back especially given that there isn’t much pushback in the comments here:
I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:
As individuals, the desire to be liked by and get along with others, including people inside and outside of EA
As individuals that have been raised in a mainstream ethical environment (most of us), a natural pluralism and strong attraction to common sense morality
The desire to live a normal life full of the normal recreational, familial, and cultural stuff
As a community, wanting to seem less weird to the rest of the world in order to be able to attract and/or work with people who are (currently) unfamiliar with the EA community.
Implicit and explicit pressure from one another against weirdness so that we don’t embarrass one another/hurt EA’s reputation
Fear of being badly wrong in a way that feels less excusable because it’s not the case that everyone else is also badly wrong in the same way
Whatever else is involved in the apparent phenomenon whereby, as a community gets bigger, it often becomes less unique
We do face some sources of pressure away from pluralism and moderation, but they seem fewer and weaker to me:
The desire to seem hardcore that you mentioned
Something about a desire for interestingness/feeling interesting/specialness (possible overlap with the above)
Selection effects—EA tends to attract people who are really into consistency and following arguments wherever they lead (though I’d guess this is getting weaker over time because of the above effects).
Maybe other things?
I do agree that we should try hard to guard against bad maximising—but I think we also need to make sure we remember what is really important about maximising in the face of pressure not to.
Also, moral and empirical uncertainty strongly favour moderation and pluralism—so I agree that it’s good to have reservations about EA ideas (though primarily in the same way it’s good to have reservations about a lot of ideas). I do not want to think of those ideas as separate from or in tension with the core ideas of EA. I think it would be better to think of them as an important part of the ideas of EA.
Somewhat speculating: I also wonder if the two problems you cite at the top are actually sort of a problem and a solution:
Maybe EA is avoiding the dangers of maximisation (insofar as we are) exactly because we are trying to maximize something we’re confused about. Since we’re confused about what ‘the good’ is, we’re constantly hedging our bets; since we’re unsure how to achieve the good, we go for robust strategies, try a variety of approaches, and try not to alienate people who can help us figure out what the good is and how to make it happen. This uncertainty reduces the risks of maximisation greatly. Analogy: Stuart Russell’s strategy to make AI safe by making it unsure about its goals.
Thank you for writing this. For a while, I have been thinking of writing a post with many similar themes and maybe I still will at some point. But this post fills a large hole.
As is obligatory for me, I must mention Derek Parfit, who tends to have already well-described many ideas that resurface later.
In Reasons and Persons, Part 1 (especially Chapter 17), Derek Parfit argues that good utilitarians should self-efface their utilitarianism. This is because people tend to have motivated reasoning, and tend to be wrong. Under utilitarianism, it is possible to justify nearly anything, provided your epistemics are reasonably bad (your epistemics would have to be very bad to justify murder under deontological theories that prohibit murder; you would have to claim that something was not in fact murder at all). Parfit suggests adopting whatever moral system seems to be most likely to produce the highest utility for that person in the long run (perhaps some theory somewhat like virtue ethics). This wasn’t an original idea, and Mill said similar things.
One way to self-efface your utilitarianism would be to say “yeah, I know, it makes sense under utilitarianism for me to keep my promises” (or whatever it may be). Parfit suggests that may not be enough, because deep down you still believe in utilitarianism; it will come creeping through (if not in you, in some proportion of people who self-efface this way). He says that you may instead need to forget that you ever believed in utilitarianism, even if you think it’s correct. You need to believe a lie, and perhaps even convince everyone else of this lie.
He also draws an interesting caveat: what if the generally agreed upon virtues or rules are no longer those with the highest expected utility? If nobody believed in utilitarianism, why would they ever be changed? He responds:
This wasn’t an original idea either; Parfit here is making a reference to Sidgwick’s “Government House utilitarianism,” which seemed to suggest only people in power should believe utilitarianism but not spread it. Parfit passingly suggests the utilitarians don’t need to be the most powerful ones (and indeed Sidgwick’s assertion may have been motivated by his own high position).
Sometimes I think that this is the purpose of EA. To attempt to be the “few people” to believe consequentialism in a world where commonsense morality really does need to change due to a rapidly changing world. But we should help shift commonsense morality in a better direction, not spread utilitarianism.
Maybe utilitarianism is an info hazard not worth spreading. If something is worth spreading, I suspect it’s virtues.
Which virtues? Some have suggestions.
This may be clear to you, and isn’t important for the main point of your comment, but I think that ‘Government House utilitarianism’ is a term coined by Bernard Williams in order to refer to this aspect of Sidgwick’s thought while also alluding to what Williams viewed as an objectionable feature of it.
Sidgwick himself, in The Methods of Ethics, referred to the issue as esoteric morality (pp. 489–490, emphasis mine):
In his Henry Sidgwick Memorial Lecture on 18 February 1982 (or rather the version of it included in Williams’s posthumously published essay collection The Sense of the Past), after quoting roughly the above passage from Sidgwick, Williams says:
There has since been the occasional paper mentioning or commenting on the issue, including a defense of esoteric morality by Katarzyna De Lazari-Radek and Peter Singer (2010).
Thanks for the background on esoteric morality!
Yes, I perhaps should have been more clear that “Government House” was not Sidgwick’s term, but a somewhat derogatory term leveled against him.
I agree it may be difficult for a utilitarian to fully deceive themselves into giving up their utilitarianism. But here’s an option that might be more feasible: be uncertain about your utilitarianism (you probably already are, or if you aren’t you should be), and act according to a theory that both 1. Utilitarianism recommends you act according to, and 2. You find independently at least somewhat plausible. This could be a traditional moral theory, or it might even be the result of the moral uncertainty calculation itself.
Very interesting perspective and comment in general, thanks for sharing!
“utilitarians should self-efface their utilitarianism”
“Parfit suggests adopting whatever moral system seems to be most likely to produce the highest utility”
“you may instead need to forget that you ever believed in utilitarianism”
This sounds plausible: you orient yourself towards the good, backpropagate over time how things play out, and learn through that which systems and policies are reliable and truly produce good results (in the context and world you find yourself in). This is also exactly what has played out in my own development: by orienting toward what produces good consequences and understanding how uncertain the world is (and how easily I fooled myself by saying I was doing the thing with the best consequences when I didn’t), I arrived at virtue ethics myself.
”For a while, I have been thinking of writing a post with many similar themes and maybe I still will at some point.” I would read it with joy and endorse a full post being devoted to this topic (happy to read drafts and provide thoughts)
I’m really glad you wrote this post. Hearing critiques from prominent EAs promotes a valuable community norm of self-reflection and not just accepting EA as is, in my opinion.
A few thoughts:
It’s important to emphasize how much maximization can be normalized in EA subpockets. You touch on this in your post: “And I’m nervous about what I perceive as dynamics in some circles where people seem to “show off” how little moderation they accept—how self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals.” I agree, and I think this is relevant to growing EA Hubs and cause-area silos. If you move to an EA hub that predominantly maximizes along one belief (e.g., AI safety in Berkeley), very natural human tendencies will draw you to also maximize along that belief. Maximizing will win you social approval and dissenting is hard, especially if you’re still young and impressionable (like meee). If you agree with this post’s reasoning, I think you should take active steps to correct for a social bias toward hard-core maximizing (see 2).
If you’re going to maximize along some belief, you should seriously engage with the best arguments for why you’re wrong. Scout mindset baby. Forming true beliefs about a complicated world is hard and motivated reasoning is easy.
Maximizing some things is still pretty cool. I think some readers (of the post and my comment) could come away with a mistaken impression that more moderation in all aspects of EA is always a good thing. I think it’s more nuanced than that: Most people who have done great things in the past have maximized much harder than their peers. I agree one should be cautious of maximizing things we are “conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA.” But maximizing some things that are good across a variety of plausibly true beliefs can be pretty awesome for making progress on your goals (e.g., maximizing early-career success and learning). And even if the extreme of maximization is bad, more maximization might be directionally good, depending on how much you’re currently maximizing. We also live in a world that may end within the next 100 years, so you have permission to be desperate.
Upvoted for the first two points, but the third seems not entirely true to me. It’s good to want to increase some things a lot. But maximizing means taking them as more important than everything else, which isn’t good any more.
Example: I’ve been “maximizing for exploration” lately by trying to find new jobs/projects/environments which I can experience for a short period of time and gain information from. But I’m not actually maximizing, which would look more like taking random actions, or doing projects completely unrelated to anything I’ve done before.
This especially seems like bad advice to me:
Maybe the linked post makes it mean something other than what I understand—but most people aren’t going to read it.
I think I still stand behind the sentiment in (3), I’m just not sure how to best express it.
I agree that 100% (naively) maximizing for something can quickly become counter-productive in practice. It is really hard to know what actions one should take if one is fully maximizing for X, so even if one wants to maximize X it makes sense to take into account things like optics, burnout, systemic cascading effects, epistemic uncertainty, and whatever else gives you pause before maximizing.
This is the type of considerate maximization I was gesturing at when I said directionally more maximizing might be a good thing for some people (to the extent they genuinely endorse doing the most good), but I recognize that ‘maximizing’ can be understood differently here.
Caveat 1: I think there are lots of things it doesn’t make sense to 100% maximize for, and you shouldn’t tell yourself you are 100% maximizing for them. “Maximizing for exploration” might be such a thing. And even if you were 100% maximizing for exploration, it’s not like you wouldn’t take into account the cost of random actions, venturing into domains you have no context in, and the cost of spending a lot of time thinking about how to best maximize.
Caveat 2: it must be possible to maximize within one of multiple goals. I care a great deal about doing the most good I can, but I also care about feeling alive in this world. I’m lying to myself if I say that something like climbing is only instrumental towards more impact. When I’m working, I’ll maximize for impact (taking into account the uncertainty around how to maximize). When I’m not, I won’t.
[meta note: I have little experience in decision theory, formal consequentialist theory, or whatever else is relevant here, so might be overlooking concepts].
Fair. Nate Soares talks about desperation as a ‘dubious virtue’ in this post, a quality that can “easily turn into a vice if used incorrectly or excessively.” He argues though that you should give yourself permission to go all out for something, at least in theory. And then look around, and see if anything you care about – anything you’re fighting for – is “worthy of a little desperation.”
Mmh, not sure how many of these problems can be solved by “just maximizing correctly”, or, as I like to call it, “better implementing utilitarianism”. E.g., some of the problems you gesture at seem like they could be solved by better evaluation and better expected value calculations.
Arguendo, once our expected value calculations become good enough, we should just switch to them, even if there is some “valley of naïve utilitarianism”.
I don’t think this is true. But let’s say I did—what makes you think our expected value calculations will ever become good enough, and how will you know if they do?
Agree with this: it seems unclear to me that they’ll become good enough in many cases since our reasoning capabilities are fairly limited and the world is really complicated. I think this point is what Eliezer is trying to communicate with the tweet pictured in the post:
In my field (operations research), which is literally all about using optimization to make ‘optimal’ decisions, one way in which we account for issues like these is with robust optimization (RO).
In RO, you account for uncertainty by assuming unknown parameters (e.g. the weights between different possible objectives) lie within some predefined uncertainty set. You then maximize your objective over the worst case values of the uncertainty (i.e., maximin optimization). In this way, you protect yourself against being wrong in a bounded worst-case way. Of course, this punts the problem to choosing a good uncertainty set, but I still think the heuristic “prefer to take actions which are good even under some mistaken assumptions” should be used more often.
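As a rough illustration of the maximin idea (a minimal sketch, not part of the original comment; the actions, objectives, and candidate weightings below are invented for illustration), you can pick whichever action has the best worst-case value across a discrete uncertainty set of weightings:

```python
# Minimal sketch of robust (maximin) selection over a discrete uncertainty set.
# Actions, objectives, and candidate weightings are illustrative only.

actions = {
    "action_a": {"near_term_welfare": 10.0, "long_term_safety": 1.0},
    "action_b": {"near_term_welfare": 6.0, "long_term_safety": 5.0},
    "action_c": {"near_term_welfare": 2.0, "long_term_safety": 9.0},
}

# Uncertainty set: plausible weightings between the two objectives.
uncertainty_set = [
    {"near_term_welfare": 0.8, "long_term_safety": 0.2},
    {"near_term_welfare": 0.5, "long_term_safety": 0.5},
    {"near_term_welfare": 0.2, "long_term_safety": 0.8},
]

def value(scores, weights):
    """Weighted value of an action under one candidate weighting."""
    return sum(weights[k] * scores[k] for k in scores)

def robust_choice(actions, uncertainty_set):
    """Pick the action whose worst-case value across the uncertainty set is highest."""
    return max(
        actions,
        key=lambda a: min(value(actions[a], w) for w in uncertainty_set),
    )

print(robust_choice(actions, uncertainty_set))  # -> "action_b": decent under every candidate weighting
```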
The difference is, though, that in robust optimization you don’t really predict the future. You create an algorithm that you control, which uses some randomness or noisy estimates, and you prove that after enough time you can expect it to wind up somewhere. This is different than straight up giving an estimate of what a single choice will lead to, in a complex non-convex system where you have no control over anything else.
I wouldn’t quite say that. In any sort of multi-period optimization you must model the consequences of actions and are (at least implicitly) predicting something about the future.
Regardless I was mostly gesturing at the intuition, I agree this doesn’t solve the problem.
Well, this depends on what we are talking about, and what the alternatives are.
So for example, 80k hours’ ranking of careers is pretty great, and people do defer to it a whole lot, but they still make adjustments to it based on personal fit, personal values, etc. And you can imagine that as it gets better and better (e.g., as it gets more years poured into its development), the argument for deferring to it becomes more forceful.
That still doesn’t answer the question − 5 or 10 years from now, how do you know what the correct level of deference should be? And why do you expect it to approach 100% and not be bounded from above by something much smaller?
Yes, there’s this weird class of arguments against consequentialism where people say “but following decision procedure X (e.g. blindly maximising short term consequences) will lead to worse outcomes!”. Yes, and that’s a good consequentialist argument for using a different decision procedure!
Perhaps someone needs to write the “Maximisation is not easy and if you think it is you will probably do it very badly” post.
Would you say that incorporating the “advice” of one of my common sense moral intuitions (e.g. “you should be nice”) can be considered part of a process called “better evaluating the EV”?
There seems to be an important trade-off here, where this is a valuable signal that the person “showing off” is aligned with your values and it’s actually pretty useful to know that (especially since current gradients often push in favor of people who are not aligned paying lip service to EA ideas in order to gain money/status/power for themselves).
The balance of how much we should ask or expect of this category of sacrifice seems like one that we should put lots of time as a community into thinking about, especially when we’re trying to grow quickly and are unusually willing to provide resources to people.
“My view is that—for the most part—people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it.”
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because utilitarianism encourages caring about collective success rather than individual success. Having a positive community and trust helps achieve these outcomes. If you have universalist moralities, it’s harder for defection to make sense.
Broadly, I think that worries that utilitarianism/consequentialism will lead to negative outcomes are often self-defeating, because the utilitarians/consequentialists see the negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; it’s the same for going around lying or being an asshole to people all the time.
In practice, many of the utilitarians/consequentialists don’t see the negative outcomes themselves, or at least sufficiently many of them don’t that things will go to shit pretty quickly. (Relatedly, see the Unilateralists’ Curse, the Epistemic Prisoner’s Dilemma, and pretty much the entire literature of game theory, all those collective action problems...).
In addition to that, it’s important not just that you actually have high integrity but that people believe you do. And people will be rightly hesitant to believe that you do if you are going around saying that the morally correct thing to do is maximize expected utility but don’t worry it’s always and everywhere true that the way to maximize expected utility is to act as if you have high integrity. There are two strategies available, then: Actually have high integrity, which means not being 100% a utilitarian/consequentialist, or carry out an extremely convincing deception campaign to fool people into thinking you have high integrity. I recommend the former & if you attempt the latter, fuck you.
Yeah, utilitarianism also isn’t going to always (or even most of the time, depending on the flavor) be convergent on “pro-social/cooperative behaviors”. I think this is because it’s easy to forget that while utilitarianism does broadly work towards the good of the community, it does so in a way that aggregates individual utility and takes an individual’s experience to be the key building block of morality (as opposed to something like Communitarianism, which centers the good of the community and the sort of behavior you mention as a more base tenet of its practice). How much it will be convergent with these behaviors is certainly up for debate, but so long as the behaviors mentioned above are only useful towards increasing aggregate individual utility, you will have many places where this will diverge. This is perhaps harder to see when you imagine a polar extreme as you mention “lying or being an asshole to people all the time” but I don’t think anyone is worried about that for utilitarianism. More that they might follow down a successive path of deceit or overriding of other people’s interest towards what they see to be the greater good (i.e. “knowing” a friend would be better off if they didn’t have to bear the weight of some bad thing in the world that relates to them that they wouldn’t find out about if you don’t tell them—this seems like the sort of thing utilitarianism might justify but maybe shouldn’t).
I disagree with the following:
“I doubt you can make a case that’s robustly compelling...”
Systemic cascading effects and path dependency might be very coherent consequentialist frameworks & catchphrases to resolve a lot of your epistemic concerns (and this is something I want to explore further).
Naive consequentialism might incentivize you to lie to “do whatever it takes to do good”, but the impacts of lying can cascade and affect the bedrock institutional culture and systems of a movement. On aggregate, these cascading (second-order) effects will make it more difficult for people to trust each other and work together in honest ways, making the moral calculus not worth it.
Furthermore, this might have a path-dependent effect, analogous to a significant/persistent/contingent effect, where choosing this path encodes certain values in the institution and makes it harder for other community values to arise in the future.
This similarly generalizes to most “overoptimization becomes illogical” problems. Naive consequentialism & low-integrity epistemics rarely make sense in the long run anyways, so it’s just a matter of dispelling simplified, naive models of reality and coherently phrasing the importance of epistemics, diversity, and plurality through a consequentialist lens.
“...and is widely agreed upon.”
Still relatively new to the community, so I might have the wrong view on this—but I’m always remarkably surprised by how openly EAs are willing to discuss flaws in the community & are concerned about solid epistemics within the community.
E.g., I recently posted a submission to the EA criticism contest—and it’s difficult for me to imagine any other subgroup which pours $100k into a contest seriously considering and rewarding internal & external criticism about its most fundamental values and community.
There’s another problem with the norm of lying for the greater good: One, it is very easy for biased human minds to convince themselves of the lie and become systematically distorted from their path. To put it in Sarah Constantin’s words:
Another problem is that you are much more vulnerable to Goodharting yourself, and eventually you will use it for motivated reasoning, where your pet causes can be lied about, and outsiders can’t tell if the organization is actually doing what it claims. While I think the deontological notion of honesty is too exploitable and naive for the 21st century, I definitely agree with Holden that lying should not be a norm, and that misleading people should also not be a norm, but a regrettable exception.
This is a really great post Holden; thank you for writing it.
As a somewhat outside observer, it seems a larger number of EAs, including many of those who drive the zeitgeist of this forum, are orienting their entire lives around EA (working directly + EA dominated social life + dating within EA + consumption of media through twitter/podcasts largely consisting of EA curation). I think this is a serious concern for many reasons, but one important one is that I suspect an insular community is more likely to produce behaviours like those described in your post.
I ultimately think this was inevitable due to the way that moralities diverge in the limit, rather than converge, so which morality was chosen has large effects on future actions.
(BTW, this is why I suspect moral realism is not true, that is there are no facts of the matter over what’s good or bad.)
One issue I have with these arguments for pluralism and for sometimes obeying something like common sense morality for its own sake and independent of utilitarian justification is that common sense morality is crazy/impossible to follow in almost all normal decision situations if you think its implications through properly.
One argument for this is MacAskill’s argument that deontology requires paralysis. Every time you leave the house, you foreseeably cause someone to die by changing the flow of traffic. Cars also contribute to air pollution which foreseeably kills people, suggesting that emitting any amount of pollution is impermissible. This violates nonconsequentialist side-constraints. I don’t understand how you can give some weight to this type of view.
This is not the point that we should follow utilitarian morality when the stakes are high from a utilitarian point of view.
A counterargument is that most people live by (imperfectly) adhering to their commonsense morality, and are usually not paralyzed. So it seems that the paralysis is a feature of a theoretical simplification rather than of the real system.
I’m mostly-deontologist and don’t think the paralysis argument works, but I also don’t think the way most people live is a good counterargument to it. I don’t think so because MacAskill is arguing against a coherent moral worldview, whereas hardly any people live according to a coherent moral worldview. Them not being paralysed is, I think, not because they have a much more refined version of deontology than what MacAskill argues against, but because they don’t have a coherent version of it at all.
I don’t think incoherence is much of a problem.
Edit: I’ll rephrase—I think it’s good to improve our morals and our adherence to them, but achieving a fully coherent moral theory is unrealistic and probably impossible.
Just look around.
Edited to maybe make it clearer what I mean.
Thanks, appreciate it! :)
I agree, and I think this analogy works: a company that sets out with the mission to “maximize shareholder value” probably won’t maximize shareholder value. Most of the world’s most valuable companies (that actually appear to have succeeded in maximizing shareholder value) have (fairly credibly) defined more meaningful missions.
There is an episode in the life of the Buddha where he believes that it would be best for him to eat very little. He does so and:
But it does not work for him:
He concludes:
While it’s not common in my experience for EAs to intentionally deprive themselves of food (at least not nowadays), they do sometimes tend to deprive themselves of time for things other than impact. Perhaps they shouldn’t.
And be wary of being one of those monks.
(From The Dialogue with Prince Bodhi)
Thank you very much for writing this post.
Thanks for writing this, Holden! I agree that potential harms from the naive (mis-)application of maximizing consequentialism is a risk that’s important to bear in mind, and to ward against. It’s an interesting question whether this is best done by (i) raising concerns about maximizing in principle, or (ii) stressing the instrumental reasons why maximizers should be co-operative and pluralistic.
I strongly prefer the latter strategy, myself. It’s something we take care to stress on utilitarianism.net (following the example of historical utilitarians from J.S. Mill to R.M. Hare, who have always urged the importance of wise rules of thumb to temper the risks of miscalculation). A newer move in this vicinity is to bring in moral uncertainty as an additional reason to avoid fanaticism, even if utilitarianism is correct and one could somehow be confident that violating commonsense norms was actually utility-maximizing on this occasion, unlike all the other times that following crude calculations unwittingly leads to disaster. (I’m excited that we have a guest essay in the works by a leading philosopher that will explore the moral uncertainty argument in more detail.)
One reason why I opt for option (ii) is honesty: I really think these principles are right, in principle! We should be careful not to misapply them. But I don’t think that practical point does anything to cast doubt on the principles as a matter of principle. (Others may disagree, of course, which is fine: route (i) might then be an available option for them!)
Another reason to favour (ii) is the risk of otherwise shoring up harmful anti-consequentialist views. I think encouraging more people to think in a more utilitarian way (at least on current margins, for most people—there could always be exceptions, of course) is on average very good. I’ve even argued on this basis that non-consequentialism may be self-effacing.
That said, some sort of loosely utilitarian-leaning meta-pluralism (of the sort Will MacAskill has been endorsing in recent interviews) may well be optimal. (It also seems more reasonable than dogmatic certainty in any one ethical approach.)
First I’ve heard of utilitarian-leaning meta-pluralism! Sounds interesting — have any links?
Will’s conversation with Tyler: “I say I’m not a utilitarian because — though it’s the view I’m most inclined to argue for in seminar rooms because I think it’s most underappreciated by the academy — I think we should have some degree of belief in a variety of moral views and take a compromise between them.”
One larger point to make here is that there may be no one true morality, i.e. that moral realism is false, and insofar as such a thing is correct, then every morality becomes essentially ideological, and there’s no special claim for anyone, including EA, to have any morality as truth. This would suggest very different actions from today.
This is also a reason to be wary of moral progress theories.
Also, this is why the future could be Lovecraftian in its morality, assuming the AI Alignment project goes well, in both its positive and negative senses, since extremely high-end technology like genetic editing and digital people will warp our moral conceptions of personal identity, species, happiness and more.
Finally, I think that honesty as a moral norm should bend to the risks of information hazards: cases where society or individuals are better off not knowing something, and where a misleading explanation is better than the truth (also known as strategic lies). AI safety is cursed here, but some other fields might have this problem.
Over the long term that longtermism focuses on, extremes, not moderates, win out.
PS: I actually think the politicization of the movement, alongside PR scandals, is plausibly a long-term threat to the movement.
Agree with this post! It’s nice to see these concerns written down.
Deeply agree that “the thing we’re trying to maximize” is itself confusing/mysterious/partially unknown, and there is something slightly ridiculous and worrying about running around trying really hard to maximize it without knowing much about what it is. (Like, we hardly even have the beginnings of a science of conscious experience, yet conscious experience is the thing we’re trying to affect.)
And I don’t think this is just a vague philosophical concern — I really do think that we’re pretty terrible at understanding the experience of many different peoples across time/which combinations of experiences are good/bad and how to actually facilitate various experiences.
People seem to have really overconfident views about what counts as improvement in value, i.e., many people by default seem to think that GDP going up and Our World in Data numbers going up semi-automatically means that things are largely improving. The real picture might be much more mixed — I think it’d be possible to have those numbers go way up while things simultaneously get worse for large tracts (or even a majority) of people. People are complicated, value is really complex and hard to understand, but often people act as if these things are mostly understood for all intents and purposes. I think they’re mostly not understood.
More charitably, the situation could be described as, “we don’t know exactly what we’re trying to maximize, but it’s something in the vicinity of ‘over there’, and it seems like it would be bad if e.g. AI ran amok, since that would be quite likely to destroy whatever it is that actually is important to maximize”. I think this is a fine line of reasoning, but I think it’s really critical to be very consciously alert to the fact that we only have a vague idea of what we’re trying to maximize.
One potential approach could be to choose a maximand which is pluralistic. In other words:
Seek a large vector of seemingly-important things (e.g. include many many detailed aspects of human experience. Could even include things which you might not care about fundamentally but are important instrumental proxies for things that you do care about fundamentally, e.g. civic engagement, the strength of various prosocial norms, …)
Choose a value function over that vector which has a particular kind of shape: it goes way way down if even one or two elements of the vector end up close to zero. I.e., don’t treat things in the vector as substitutable with one another; having 1000x of item A isn’t necessarily enough to make up for having 0 of item B. To give a general idea: something like the product of the sqrts of the items in the vector (see the sketch after this list).
Maintain a ton of uncertainty about what elements should be included in the vector, generally seek to be adding things, and try to run processes that pull deep information in from a TON of different people/perspectives about what should be in the vector. (Stuff like Polis can be kind of a way to do this; could likely be taken much further though. ML can probably help here. Consider applying techniques from next-gen governance — https://forum.effectivealtruism.org/posts/ue9qrxXPLfGxNssvX/cause-exploration-governance-design-and-formation)
Treat the question of “which items should be in the vector?” and “what value function should we run over the vector?” as open questions that need to be continually revisited. Answering that question is a whole-of-society project across all time!
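As a rough illustration of the non-substitutable value-function shape described in the second bullet above (a minimal sketch with made-up element names and numbers, not a concrete proposal): a product-of-square-roots aggregation collapses whenever any single element does, while a plain sum happily trades one element off against the others.

```python
import math

# Hypothetical pluralistic "vector" of things we might care about (illustrative only).
outcome_a = {"health": 100.0, "autonomy": 100.0, "trust": 100.0}
outcome_b = {"health": 1000.0, "autonomy": 1000.0, "trust": 0.01}

def product_of_sqrts(vector):
    """Non-substitutable aggregation: tanks if any element is near zero."""
    return math.prod(math.sqrt(v) for v in vector.values())

def plain_sum(vector):
    """Substitutable aggregation: more of A can fully offset losing B."""
    return sum(vector.values())

# The plain sum prefers outcome_b; the product-of-sqrts strongly prefers outcome_a,
# because trust collapsing toward zero drags the whole product down.
print(plain_sum(outcome_a), plain_sum(outcome_b))                # 300.0  2000.01
print(product_of_sqrts(outcome_a), product_of_sqrts(outcome_b))  # ~1000.0  ~100.0
```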
I strongly agree with this post.
Thinking consequentially, in terms of expected value and utility functions, will make you tend to focus on the first-order consequences of your actions and lead to a blind-spot for things that are fuzzy and not easily quantifiable, e.g. having loyal friends or being considered a trustworthy person.
I think that especially in the realm of human relationships the value of virtues such as trust, honesty, loyalty, honor is tremendous—even if these virtues may often imply actions with first-order consequences that have ‘negative expected value’ (e.g. helping a friend clean the kitchen when you could be working on AI alignment).
This is why I try to embrace deontological frameworks and heuristics in day-to-day life and in such things as social relationships, friendships, co-living etc.: Even if the upside of that is hard to quantify, I am convinced that the value of the higher-order consequences of it far outweigh the ‘first-order inconvenience/downside’.
> Maximizing X conceptually means putting everything else aside for X—a terrible idea unless you’re really sure you have the right X.
Disagree. This sounds like a rejection of expected value. You only “put everything else aside” if you are sure that “everything else” doesn’t matter. If you aren’t sure, then put some weight into “everything else”. If you have any conception of better and worse, using EV is tautological.
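A minimal sketch of the weighting idea in this comment (the two-component split and all numbers are invented for illustration): instead of maximizing X alone, maximize a credence-weighted blend of X and “everything else,” so that an option which wrecks everything else stops looking best once you admit you might be wrong about X being all that matters.

```python
# Illustrative only: weight "everything else" by your credence that it matters.
def blended_value(x_score, everything_else_score, credence_x_is_all_that_matters):
    """Expected value when unsure whether X is the only thing that matters."""
    p = credence_x_is_all_that_matters
    return p * x_score + (1 - p) * everything_else_score

# An option that is great for X but terrible for everything else...
print(blended_value(x_score=100.0, everything_else_score=-50.0,
                    credence_x_is_all_that_matters=0.6))  # 40.0
# ...versus a moderate option that does fine on both dimensions.
print(blended_value(x_score=60.0, everything_else_score=40.0,
                    credence_x_is_all_that_matters=0.6))  # 52.0 (the moderate option wins)
```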
I have no issues with a movement stating its normative framework and maximizing based off of that. If I don’t (fully) agree with its framework then I won’t (fully) support it. To be clear though, EA absolutely isn’t stating its normative framework.
If your ethical framework causes you to act as a “low integrity” or “bad person” and you see these things as very bad then use a different framework. If other people don’t think low integrity is a bad thing then that’s just what they think. If you want to stop them you need to make a deal with them or use force or convince them why that isn’t actually rational under their framework.
> We’d have a bitterly divided community, with clusters having diametrically opposed goals.
I see this as inevitable. If people have different values they can’t live in agreement forever. There is no reason to believe that we all have the same values. We should of course engage in moral trades to fix common ground while there is common ground to be improved. At some point there isn’t though, and deciding not to maximize because you will be fighting with others is just a loser’s mentality. Whoever does maximize will imprint their values into the world more.
It still depends how sure you are about your own values (sure that you will always endorse them, sure that they are ‘correct’ values, or sure in some other sense)
You can just frame uncertainty about your values as putting less weight on your values relative to all other possible values.
On one end, if you have no certainty that you are correct about any of your moral values (whatever correct means to you), you still “maximize”. The difference is that under your framework all states are equally good, so maximization requires nothing from you.
That’s why I said what I said. Either OP is explicitly rejecting EV, or OP is basically calling out the community for having poorly thought-through values, so that it would be higher EV to not totally use them. But it seems simpler to just emphasize thinking things through more, or to recommend we consider that we are overconfident about our values being “correct”.
Thanks for writing this. My comment is on how the ideology here overlaps with practical community building efforts.
I think avoiding being extremely hard-core in the ways described here has a lot of synergy with making EA a Big Tent movement. Luke argues that most folks who find EA take a gradual approach to becoming engaged and making bigger commitments. EA culture ought to leave room for different levels of commitment in ideology and in personal commitment.
There’s also the possibility that a maximum doesn’t exist.
Suppose you had a one-shot utility machine, where you simply punch in a number, and the machine will generate that many utils then self-destruct. The machine has no limit in the number of utils it can generate. How many utils do you select?
“Maximise utility” has no answer to this, because there is no maximum.
In real life, we have a practically infinite number of actions available to us. There might be a sense in which due to field quantisation and finite negentropy there are technically finitely many actions available, but certainly there are more actions available than we could ever enumerate, let alone estimate the expected utility for.
In practice, it seems like the best way to actually maximise value is just to do lots of experimental utility-generating projects, and greedily look for low-effort, high-reward strategies.
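A toy sketch of the greedy heuristic gestured at in the last paragraph (the project names, efforts, and rewards are all invented): rank candidate projects by expected reward per unit of effort and take them until an effort budget runs out.

```python
# Toy greedy heuristic: pick high-reward, low-effort projects first (illustrative numbers).
projects = [
    {"name": "quick_experiment", "effort": 2.0, "expected_reward": 10.0},
    {"name": "big_moonshot", "effort": 50.0, "expected_reward": 80.0},
    {"name": "small_fix", "effort": 1.0, "expected_reward": 4.0},
    {"name": "medium_project", "effort": 10.0, "expected_reward": 30.0},
]

def greedy_portfolio(projects, effort_budget):
    """Greedily select projects by reward-to-effort ratio until the budget is spent."""
    chosen = []
    remaining = effort_budget
    for p in sorted(projects, key=lambda p: p["expected_reward"] / p["effort"], reverse=True):
        if p["effort"] <= remaining:
            chosen.append(p["name"])
            remaining -= p["effort"]
    return chosen

print(greedy_portfolio(projects, effort_budget=15.0))
# ['quick_experiment', 'small_fix', 'medium_project']
```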
This problem of consequentialism applied to real human problems will always be there as long as what is “good” for others is not defined by those others, for themselves. It’s impossible to determine what is or isn’t good for another person without abstracting their agency away—which means whatever conclusion you come to about what is “good” for them will always be flawed.
There are a lot of things we can say with certainty are nearly universally understood as “good”—like being able to live, for instance. It means EA isn’t in conflict with the right direction, for most, at this moment in time, because its giving and “good” work is largely focused on these sorts of lower-level, fundamental human and animal welfare problems. As you progress however, which is all basically theoretical at this point, this is where what is “good” becomes more and more grey and you encounter the problems created by having ignored the agency of those others.
As you get closer to a potential maximum benefit, I’d suggest you’d realize that the only real “good” or benefit an altruist can pursue is the maximization of agency for others. People with agency choose life, they choose better living, they choose personal progress, they can better see suffering, they choose to acknowledge the agency of others, and if this sort of agency-as-good is the motivation of EAs, then you are far less likely to have to deal with fanaticism issues. When was there ever a harmful fanatic whose MO was “I want to give everyone else all the power”?
The larger problem is that moralities diverge at the tails, not converging. This is why moderation doesn’t win out in long-term, and why different moral systems will view a high-tech future with very different reactions.
(BTW, this is why I suspect moral realism is not true, that is there are no facts of the matter over what’s good or bad.)
Yep, that’s what I said. The further you get from addressing basic human needs problems, the more grey “good” becomes. But it’s always grey; it just gets more and more grey towards ‘the tails’, as you say. I’m not really a moral realist either.
Another way to state my overall argument is that really the only altruistic thing to do is to make sure everyone has the power to make effectual moral decisions for themselves—the same power you have. This doesn’t exclude addressing basic human needs, btw. It would likely necessitate it.
If you believe in people, which I imagine effective altruists do because what’s the point otherwise, then people with agency that end up matching your level of agency will also end up updating to your moral position anyway, if you’re right.
Not if they inherently care about different things, eg psychopaths who enjoy taking away others’ agency
I should have been more clear I guess: I was talking about the moral imperative of altruists, not a social regulating or political philosophy. But, from the perspective I am arguing here, imbuing that psychopath with agency would negate work to imbue everyone else with agency. I am actually not arguing against utility here, just against determining what is good for others, which only makes sense if you believe that people, as a whole or majority, are good or will choose good—which is sort of a required belief to be an altruist, isn’t it?
Isn’t that just hardcore libertarianism, which some consider to be harmful?
I don’t know of any libertarian philosophy that really considers the importance or moral value of other people’s agency, let alone one that actively seeks to enable agency in others in order to do or maximize good. As far as I understand libertarianism, it’s pretty much just concerned with “self”, and the only other-regarding aspect of it is ensuring that others don’t interfere with others in order to preserve “self”, which makes it not other-regarding at all. There’s certainly little if any altruism involved. I mean, an individual libertarian could pursue altruism, I suppose, but it’s not a part of the underlying philosophy. I’d actually suggest that altruism, which is a reciprocal behavior, is pretty much opposed to libertarian behaviors.
I agree with this concept objectively. As in many things in life, the truth is probably complicated—even if we can simplify it with an analogy. Depth and quality matter as much as quantity.
When you’re looking at “downstream” vs “upstream” solutions (immediate consequences vs root issues), the lens of maximization is going to have a different impact. The efficiency of energy spent, the effort and markers identified and measured; all would be different depending on what you are paying attention to. It’s difficult, but being able to hold space for diverse and even conflicting opinions (within the mindset of “yes &”) is something I see a lot of bureaucracies struggle to do. In IFS (a therapy that treats aspects of our inner selves as “parts”), finding ways to validate concerns can ultimately lead to better consensus and creative ways of resolving tension—if you apply this concept to an org, arguing internally will maintain tension whereas validation may lead to precious refinements/insights that shift the goal slightly.
In short: Layers of mindfulness, openness to conflicting perspectives, and discussion about the positive impact qualities may help embody “good maximization” without becoming too narrow minded or solution jumping.
“Certainty is the antidote to learning”—High Performance Habits
That came to my mind when reading this post (which I really liked, so much that I don’t have much else to add :)).