Review: Good Strategy, Bad Strategy


I used to think that all generic strategy advice was pointless. After all, the point of a strategy is to achieve a thing, and to achieve a thing you just think hard about how to best do it and then work hard to do it. I said this to my friend Dewi, who said that this is mostly true, but there is an exception: Good Strategy, Bad Strategy by Richard Rumelt. Dewi was right.

The book has some principles. In particular: a good strategy should include a diagnosis of the problem, an overall guiding policy, and a set of coherent actions. A laundry list of actions, a goal, or a vague idea of which direction to move in are not strategies.

But most of the book’s value is reading a bunch of examples and soaking up the thinking style embedded in them. Therefore, this review is mostly a series of vignettes taken directly from Rumelt’s text, which hopefully 80/20 the value of the book (or serve as a helpful reminder if you’ve read it). Rumelt’s vignettes are great.

I don’t include anything from Rumelt’s lengthy attack on most of what passes for “strategy consulting”. I also won’t mention the part where he spends most of a chapter giving a pop-sci history of physics from Galileo to dark matter in order to segue into a point about Starbucks’ business strategy—he’s an emeritus professor, so presumably he can just do this.

Example

“In 1805, England had a problem. Napoléon had conquered big chunks of Europe and planned the invasion of England. But to cross the Channel, he needed to wrest control of the sea away from the English. Off the southwest coast of Spain, the French and Spanish combined fleet of thirty-three ships met the smaller British fleet of twenty-seven ships. The well-developed tactics of the day were for the two opposing fleets to each stay in line, firing broadsides at each other. But British admiral Lord Nelson had a strategic insight. He broke the British fleet into two columns and drove them at the Franco-Spanish fleet, hitting their line perpendicularly. The lead British ships took a great risk, but Nelson judged that the less-trained Franco-Spanish gunners would not be able to compensate for the heavy swell that day. At the end of the Battle of Trafalgar, the French and Spanish lost twenty-two ships, two-thirds of their fleet. The British lost none. Nelson was mortally wounded, becoming, in death, Britain’s greatest naval hero. Britain’s naval dominance was ensured and remained unsurpassed for a century and a half.

Nelson’s challenge was that he was outnumbered. His strategy was to risk his lead ships in order to break the coherence of his enemy’s fleet. With coherence lost, he judged, the more experienced English captains would come out on top in the ensuing melee. Good strategy almost always looks this simple and obvious and does not take a thick deck of PowerPoint slides to explain. It does not pop out of some “strategic management” tool, matrix, chart, triangle, or fill-in-the-blanks scheme. Instead, a talented leader identifies the one or two critical issues in the situation—the pivot points that can multiply the effectiveness of effort—and then focuses and concentrates action and resources on them.”

The basics

The two fundamental things

  1. There’s great advantage in having any coherent strategy at all; most orgs don’t have one, so people will be actively surprised if you do

  2. Subtle shifts of viewpoint can lead to realising new strengths; reframe the situation so you see the leverage point

Coherence

Example: the obvious

[...] I carried out interviews with twenty-six executives, all division managers or CEOs in the electronics and telecommunications sector. My interview plan was simple: I asked each executive to identify the leading competitor in their business. I asked how that company had become the leader—evoking their private theories about what works. And then I asked them what their own company’s current strategy was. These executives, by and large, had no trouble describing the strategy of the leader in their sectors. The standard story was that some change in demand or technology had appeared—a “window of opportunity” had opened—and the current leader had been the first one to leap through that window and take advantage of it. Not necessarily the first mover, but the first to get it right.

But when I asked about their own companies’ strategies, there was a very different kind of response. Instead of pointing to the next window of opportunity, or even mentioning its possibility, I heard a lot of look-busy doorknob polishing. They were making alliances, they were doing 360-degree feedback, they were looking for foreign markets, they were setting challenging strategic goals, they were moving software into firmware, they were enabling Internet updates of firmware, and so on.

Example: Jobs waits

In the late 1990s:

[Rumelt said:] “Steve, this turnaround at Apple has been impressive. But everything we know about the PC business says that Apple cannot really push beyond a small niche position. The network effects are just too strong to upset the Wintel standard. So what are you trying to do in the longer term? What is the strategy?”

He did not attack my argument. He didn’t agree with it, either. He just smiled and said, “I am going to wait for the next big thing.”

Strategy is unexpected

The Department of the Army publishes field manuals fully describing its basic doctrines and methods. FM 100-5, published in 1986, was titled Operations and was described as “the Army’s keystone warfighting manual.” Part 2 of FM 100-5 was dedicated to “Offensive Operations,” and on page 101 it described “envelopment” as the most important form of offensive maneuver—the U.S. Army’s “Plan A.” The manual said:

Envelopment avoids the enemy’s front, where its forces are most protected and his fires most easily concentrated. Instead, while fixing the defender’s attention forward by supporting or diversionary attacks, the attacker maneuvers his main effort around or over the enemy’s defenses to strike at his flanks and rear.

But when the US Army, in the First Iraq War, followed its own advice and avoided a frontal attack on the Iraqi forces in Kuwait, everyone was shocked and called it brilliant strategy.

A lot of this was just preventing everyone else from pursuing their own special agenda, in order to get them to act coherently towards a shared goal:

In the case of Desert Storm, the focus was much more than an intellectual step. Schwarzkopf had to suppress the ambitions and desires of the air force, marines, various army units, each coalition partner, and the political leadership in Washington. For example, the U.S. Army’s best light infantry—the Eighty-Second Airborne—was tasked with providing support to French armor and infantry, an assignment its leadership protested. Eight thousand U.S. Marines waited on ships to land on the beaches of Kuwait City, but did not. It was a diversion. Air force commanders wanted to demonstrate the value of strategic bombing—they believed that the war could be won by air attacks on Baghdad—and had to be forced against strenuous protest to divert their resources to fully support the land offensive. Secretary of Defense Dick Cheney wanted the mission accomplished with a smaller force and detailed an alternative plan of attack. Prince Khalid, commanding the Saudi forces in the coalition, insisted that King Fahd be involved in the planning, but Schwarzkopf convinced President Bush to ensure that U.S. Central Command retained control over strategy and planning.

Advantage

Rumelt quotes Andy Marshall, director of the Office of Net Assessment (which thinks about the US defence situation), complaining about the annual budgeting process of the defence department:

“This process of justifying expenditures as counters to Soviet expenditures conditioned U.S. actions on Soviet strengths, expressed as threats, not on Soviet weaknesses and constraints. We had a war strategy—a catastrophic spasm—but no plan about how to compete with the Soviet Union over the long term.”

Marshall and some others had written “Strategy for Competing with the Soviets in the Military Sector of the Continuing Political-Military Competition” in 1976:

This fascinating analysis of the situation worked to redefine “defense” in new terms—a subtle shift in point of view. It argued that “in dealing effectively with the other side, a nation seeks opportunities to use one or more distinctive competences in such a way as to develop competitive advantage—both in specific areas and overall.” It then went on to explain that the crucial area of competition was technology because the United States had more resources and better capabilities in that area. And, most important, it argued that having a true competitive strategy meant engaging in actions that imposed exorbitant costs on the other side. In particular, it recommended investing in technologies that were expensive to counter and where the counters did not add to Soviet offensive capabilities. For instance, increasing the accuracy of missiles or the quietness of submarines forced the Soviet Union to spend scarce resources on counters without increasing the threat to the United States. Investments in systems that made Soviet systems obsolete would also force them to spend, as would selectively advertising dramatic new technologies.

It seems this actually happened—cf. Ronald Reagan’s Star Wars program.

Rumelt comments:

There were no complex charts or graphs, no abstruse formulas, no acronym-jammed buzz speak: just an idea and some pointers to how it might be used—the terrible simplicity of the discovery of hidden power in a situation.

Marshall and Roche’s analysis included a list of U.S. and Soviet strengths and weaknesses. Such lists were not new, and the traditional response to them would have been to invest more to tip the “balance” in one’s favor. But Marshall and Roche, like Sam Walton, had an insight that, when acted upon, provided a much more effective way to compete—the discovery of hidden power in the situation.

The kernel of good strategy

The kernel of a strategy contains three elements:

  1. A diagnosis that defines or explains the nature of the challenge. A good diagnosis simplifies the often overwhelming complexity of reality by identifying certain aspects of the situation as critical.

  2. A guiding policy for dealing with the challenge. This is an overall approach chosen to cope with or overcome the obstacles identified in the diagnosis.

  3. A set of coherent actions that are designed to carry out the guiding policy. These are steps that are coordinated with one another to work together in accomplishing the guiding policy.

If there is one specific and explicit piece of knowledge to take from this book, it’s this.

Diagnosis

Example: US Cold War policy

A diagnosis is generally denoted by metaphor, analogy, or reference to a diagnosis or framework that has already gained acceptance. For example, every student of U.S. national strategy knows about the diagnosis associated with the Cold War guiding policy of containment. This concept originated with George Kennan’s famous “long telegram” of 1946. Having served as an American diplomat in the USSR for more than a decade, and having seen Soviet terror and politics at close hand, he carefully analyzed the nature of Soviet ideology and power. Kennan started with the observation that the Soviet Union was not an ordinary nation-state. Its leaders defined their mission as opposition to capitalism and as spreading the gospel of revolutionary communism by whatever means necessary. He stressed that antagonism between communism and capitalist societies was a central foundation of Stalin’s political regime, preventing any sincere accommodation or honest international agreements. However, he also pointed out that the Soviet leaders were realists about power. Therefore, he recommended a guiding policy of vigilant counterforce:

“In the light of the above, it will be clearly seen that the Soviet pressure against the free institutions of the western world is something that can be contained by the adroit and vigilant application of counter-force at a series of constantly shifting geographical and political points, corresponding to the shifts and maneuvers of Soviet policy, but which cannot be charmed or talked out of existence. The Russians look forward to a duel of infinite duration, and they see that already they have scored great successes.”

Kennan’s diagnosis for the situation—a long-term struggle without the possibility of a negotiated settlement—was widely adopted within policy-making circles in the United States. His guiding policy of containment was especially attractive as it specified a broad domain of action—the USSR was, metaphorically speaking, infected by a virus. The United States would have to keep the virus from spreading until it finally died out. Kennan’s policy is sometimes called a strategy, but it lacked the element of action. All presidents from Truman through George H. W. Bush struggled with the problem of turning this guiding policy into actionable objectives. Over time, the guiding policy of containment led to NATO and SEATO, the Berlin Airlift, the Korean War, placing missiles in Europe, the Vietnam War, and other Cold War actions.

The power of Kennan’s diagnosis can be seen by considering how history might have been different if the situation had been framed another way in 1947. Perhaps the Soviet Union could have been enticed into the world community through a policy of engagement by including it in the Marshall Plan. Or perhaps it wasn’t an American problem at all, but an issue for the United Nations. Or perhaps the Soviet Union was a tyranny rivaling Nazi Germany, and the United States should have sought to actively oppose it, undermine it, and liberate its population.

Example: IBM

[W]hen Lou Gerstner took over the helm at IBM in 1993, the company was in serious decline. Its historically successful strategy had been organized around offering complete, integrated, turnkey end-to-end computing solutions to corporations and government agencies. However, the advent of the microprocessor changed all that. The computer industry began to fragment, with separate firms offering chips, memory, hard disks, keyboards, software, monitors, operating systems, and so on. [...] As computing moved to the desktop, and as IBM’s desktop offering became commoditized by clone competitors and the Windows-Intel standard, what should the company do? The dominant view at the company and among Wall Street analysts was that IBM was too integrated. The new industry structure was fragmented and, it was argued, IBM should be broken up and fragmented to match. As Gerstner arrived, preparations were under way for separate stock offerings for various pieces of IBM.

After studying the situation, Gerstner changed the diagnosis. He believed that in an increasingly fragmented industry, IBM was the one company that had expertise in all areas. Its problem was not that it was integrated but that it was failing to use the integrated skills it possessed. IBM, he declared, needed to become more integrated—but this time around customer solutions rather than hardware platforms. The primary obstacle was the lack of internal coordination and agility. Given this new diagnosis, the guiding policy became to exploit the fact that IBM was different, in fact, unique. IBM would offer customers tailored solutions to their information-processing problems, leveraging its brand name and broad expertise, but willing to use outside hardware and software as required. Put simply, its primary value-added activity would shift from systems engineering to IT consulting, from hardware to software. Neither the “integration is obsolete” nor the “knowing all aspects of IT is our unique ability” viewpoints are, by themselves, strategies. But these diagnoses take the leader, and all who follow, in very different directions.

Guiding policy

The guiding policy outlines an overall approach for overcoming the obstacles highlighted by the diagnosis. It is “guiding” because it channels action in certain directions without defining exactly what shall be done. Kennan’s containment and Gerstner’s drawing on all of IBM’s resources to solve customers’ problems are examples of guiding policies. Like the guardrails on a highway, the guiding policy directs and constrains action without fully defining its content.

[...]

A guiding policy creates advantage by anticipating the actions and reactions of others, by reducing the complexity and ambiguity in the situation, by exploiting the leverage inherent in concentrating effort on a pivotal or decisive aspect of the situation, and by creating policies and actions that are coherent, each building on the other rather than canceling one another out.

Example: corner grocery store

To look more closely at how a guiding policy works, follow the thinking of Stephanie, a friend who owns a corner grocery store. She does the accounts, manages personnel, sometimes runs the cash register, and makes all the decisions. Several years ago, Stephanie told me about some of the issues she was facing. She was considering whether she should keep prices down or offer more expensive, fresh organic produce. Should she begin to stock more Asian staples for the many Asian students who lived in the area? Should the store be open longer hours? How important was it to have a helpful, friendly staff that gets to know the regulars? Would adding a second checkout stand pay off? What about parking in the alley? Should she advertise in the local college newspaper? Should she paint the ceiling green or white? Should she put some items on sale each week? Which ones?

[some digs about how economists aren’t helpful for this problem]

Thinking about her store, Stephanie diagnosed her challenge to be competition with the local supermarket. She needed to draw customers away from a store that was open 24/7 and had lower prices. Seeking a way forward, she believed that most of her customers were people who walked by the store almost every day. They worked or lived nearby. Scanning her list of questions and alternatives, she determined that there was a choice between serving the more price-conscious students or the more time-sensitive professionals. Transcending thousands of individual choices and instead framing the problem in terms of choosing among a few customer groups provided a dramatic reduction in complexity.

Of course, if both of these customer segments could be served with the same policies and actions, then the dichotomy would have been useless and should be cast aside. In Stephanie’s case, the difference seemed significant. More of her customers were students, but the professionals who stopped in made much larger purchases. Pushing further along, Stephanie began to explore the guiding policy of “serve the busy professional.” After some more tinkering, Stephanie sharpened the guiding policy a bit more, deciding to target “the busy professional who has little time to cook.”

There was no way to establish that this particular guiding policy was the only good one, or the best one. But, absent a good guiding policy, there is no principle of action to follow. Without a guiding policy, Stephanie’s actions and resource allocations would probably be inconsistent and incoherent, fighting with one another and canceling one another out. Importantly, adopting this guiding policy helped reveal and organize the interactions among the many possible actions. Considering the needs of the busy professional with little time to cook, she could see that the second checkout stand would help handle the burst of traffic at 5 p.m. So would more parking in the alley. In addition, she felt she could take space currently used for selling munchies to students and offer prepared high-quality take-home foods instead. Professionals, unlike students, would not come shopping at midnight, so there was no need for very late hours. The busy professionals would appreciate adequate staffing after work and, perhaps, at lunchtime. Having a guiding policy helped create actions that were coordinated and concentrated, focusing her efforts.

Coherent action

Action requires doing something

Strategy is about action, about doing something. The kernel of a strategy must contain action.

Rumelt gives an example of a consumer goods producer that was running a “Pan-European” initiative, to try to achieve economies of scale in both production and marketing.

The heads of the country-based organizations were placed on a Pan-Europe Executive Committee, which met once a quarter. Developers from Germany and the United Kingdom were rotated between the two locations. A New Products group had been created to consult with all departments on opportunities for Pan-European concepts and brands. Part of each executive’s evaluation for promotion was based on his or her contribution to the Pan-European initiative. Despite these measures, nothing much had happened. The German and British developers each claimed that their initiatives were unsupported by the other. The one British-German joint initiative had not been picked up by the rest of the organization.

[...]

“Suppose,” I said, “that this was really important, really top-priority critical. Suppose you absolutely had to get some Pan-European products developed and marketed in the next eighteen months or everything would collapse. What would you do then?”

“For one thing,” he said, throwing his arms up in mock surrender, “I would close one of the development groups. They spend more time bickering than developing.”

Then he thought for a moment and said, “I just might close both and start over in the Netherlands. There is a market-test office there we could use as a seed. We could take some of the best people from the UK and Germany and start fresh. Still, that doesn’t solve the problem of getting the country managers on board.”

“And the country managers’ lack of enthusiasm is because … ?” I asked.

“Well, each country manager has spent years understanding the special conditions in a country, tailoring products and marketing programs to that country’s local conditions. They don’t trust the Pan-European idea. The French don’t want to waste marketing efforts on products they see as ‘too British’ or ‘too German.’ And there really has not yet been a compelling Pan-European product that all could get behind. If it were already a success in three or four countries, the rest would get behind it. But everyone has their current portfolio of products to worry about.”

“Right,” I said. “Their jobs are running the present country-based system. And you want new Pan-European initiatives. Now, you can use a shoe to hammer a nail, but it will take a long time. Don’t you need a different tool for this task? If it were really important to get this done, I think you know how you would do it.”

“Of course,” he said. “We could have a single group develop, roll out, and market Pan-European products and take full profit responsibility.”

“At the same time,” I added, “you would have to intervene in the country-based system with special budget overrides for this initiative, promotions for people who help it along, and career problems for people who don’t.”

We moved back to the center of the office, and he sat at his desk, a position of authority. He looked at me and said, “That would be a very painful road. Many noses would get out of joint. It would be better to win people over to this point of view rather than force them over.”

“Right,” I said. “You would only take all those painful steps if it were really important to get action on this concept. Only if it were really important.”

It took another nine months for him to decide that the Pan-European initiative was indeed important and move to reorganize European operations. There was no magical solution to his problem of wanting strong country-based marketing, Pan-European initiatives, and no noses out of joint, all at the same time. As long as strategy remained at the level of intent and concept, the conflicts among various values and between the organization and the initiative remained tolerable. It was the imperative of action that forced a decision about which issue was actually the most important.

[...] Here, as in so many situations, the required actions were not mysterious. The impediment was the hope that the pain of those actions could, somehow, be avoided. Indeed, we always hope that a brilliant insight or very clever design will allow us to accomplish several apparently conflicting objectives with a single stroke, and occasionally we are vouchsafed this kind of deliverance. Nevertheless, strategy is primarily about deciding what is truly important and focusing resources and action on that objective. It is a hard discipline because focusing on one thing slights another.

Coherence

In 2003, I worked with a company whose initial “strategy” was to (1) close a plant in Akron and open a new plant in Mexico, (2) spend more on advertising, and (3) initiate a 360-degree feedback program. Now these actions may all have been “good ideas,” but they did not complement one another. They are “strategic” only in the sense that each probably requires the approval of top management. My view is that doing these things might be sound operational management, but it did not constitute a strategy. A strategy coordinates action to address a specific challenge. It is not defined by the pay grade of the person authorizing the action.

The idea that coordination, by itself, can be a source of advantage is a very deep principle. It is often underappreciated because people tend to think of coordination in terms of continuing mutual adjustments among agents. Strategic coordination, or coherence, is not ad hoc mutual adjustment. It is coherence imposed on a system by policy and design.

Strategy is visible as coordinated action imposed on a system. When I say strategy is “imposed,” I mean just that. It is an exercise in centralized power, used to overcome the natural workings of a system. This coordination is unnatural in the sense that it would not occur without the hand of strategy.

[insert standard warning of the dangers of top-down totalitarianism]

But decentralized decision making cannot do everything. In particular, it may fail when either the costs or benefits of actions are not borne by the decentralized actors. The split between the costs and benefits may occur across organizational units or between the present and the future. And decentralized coordination is difficult when benefits accrue only if decisions are properly coordinated. Of course, centrally designed policies can also fail if the decision makers are foolish, in the pay of special interest groups, or simply choose incorrectly.

As a simple example, salespeople love to please customers with rush orders, and manufacturing people prefer long uninterrupted production runs. But you cannot have long production runs and handle unexpected rush orders all at the same time. It takes policies devised to benefit the whole to sort out this conflict.

[...]

[...W]e should seek coordinated policies only when the gains are very large. There will be costs to demanding coordination, because it will ride roughshod over economies of specialization and more nuanced local responses. The brilliance of good organization is not in making sure that everything is connected to everything else. Down that road lies a frozen maladaptive stasis. Good strategy and good organization lie in specializing on the right activities and imposing only the essential amount of coordination.

Rumelt also describes Hannibal’s defeat of the Roman army at Cannae. A key part was to get the Gauls and Spaniards at the centre of his line to feign a retreat. Feigning a retreat would’ve seemed dishonourable and dangerous to them. If Hannibal hadn’t been able to persuade them to do that, the strategy wouldn’t have worked.

Sources of advantage

This is a weaker section of the book than the previous one. Rumelt admits as much, saying that the core of strategy is the basics above, and that the specifics vary a lot by field and industry (cf. the point I opened this post with). But still, below are some examples. The chapter on taking advantage of dynamics (i.e. changes in an industry) is especially good; see below.

  • Leverage (but Rumelt could’ve called this point “neglectedness”)

    • Harold Williams was put in charge of the oil billionaire Getty’s fortune. The mandate was to spend it on art. He could’ve spent the billions on buying a huge art collection, as others in his position did. But instead he decided to create a digital catalog of art, and actually had an effect on the world.

  • Proximate goals

    • JFK’s goal of landing on the moon is a masterpiece of goal-setting. It was based on a memo by Wernher von Braun, which argued in particular that it was far enough in the future that the US could catch up to and beat the Soviets.

    • Also in the 1960s, NASA’s JPL was planning the unmanned Surveyor lunar lander. The biggest issue was that they didn’t know what the moon’s surface would be like. Project lead Phyllis Buwalda decided to assume that the moon was basically like a Southwestern US desert, and to base all design decisions on that unproven assumption. Testing became easy—go to the Southwest. And if the assumption was very wrong (e.g. the moon’s surface was spiky boulders or soft dust that anything sinks into)? Well, then they were screwed anyway.

  • Chain-link systems (if even one thing breaks or is low-quality, it brings down the entire thing).

    • Rumelt gives an example of an overhaul at a machinery company, where many things were broken, and the general manager conducted three sequential campaigns to fix production quality, then sales, then cost. Doing it in sequence, it is implied, was necessary for the required focus.

    • IKEA is very hard to compete with because you need all of the design, the catalog, the self-assembly principle, and many spacious buildings. A traditional furniture retailer can’t just add a catalog, because they’ll still fall short of IKEA’s combination.

  • Extending advantage.

    • In 2008, Rumelt talked with the president of Disney about strategy. The Disney brand was obviously the most valuable thing the company had, so they couldn’t afford to dilute it. But they wanted to expand beyond children’s movies. So the Disney president set up three principles: no cursing, no sex, no gratuitous violence. Then they went ahead and released Pirates of the Caribbean, acquired Star Wars, etc.

Dynamics

When change occurs, most people focus on the main effects—the spurts in growth of new types of products and the falling demand for others. You must dig beneath this surface reality to understand the forces underlying the main effect and develop a point of view about the second-order and derivative changes that have been set into motion. For example, when television appeared in the 1950s it was clear that everyone would eventually have one and that “free” TV entertainment would provide strong competition to motion pictures. A more subtle effect arose because the movie industry could no longer lure audiences out of their homes with “just another Western.” Traditional Hollywood studios had been specialized around producing a steady stream of B-grade movies and did not easily adapt. By the early 1960s, movie attendance was shrinking rapidly. What revived Hollywood film was a shift to independent production, with studios acting as financiers and distributors. Independent producers, freed from the nepotism and routines of the traditional studio, could focus on assembling a handpicked team to make a film that might be good enough to pull an audience off of their family-room sofas. Thus, a second-order effect of television was the rise of independent film production.

In a discussion with a group of managers at Qualcomm, a San Diego maker of mobile phone chips, I reviewed Moore’s point about the escalating costs of designing more and more complex special-purpose chips. One manager was puzzled and asked if it wasn’t also expensive to create software. He went on to rhetorically ask “Are software engineers less expensive than hardware engineers?”

[...]

We quickly developed an answer that has since stood up to scrutiny by a number of other technical groups: Good hardware and software engineers are both expensive. The big difference lies in the cost of prototyping, upgrading, and, especially, the cost of fixing a mistake. Design always involves a certain amount of trial and error, and hardware trials and errors are much more costly. If a hardware design doesn’t work correctly, it can mean months of expensive redesign. If software doesn’t work, a software engineer fixes the problem by typing new instructions into a file, recompiling, and trying again in a few minutes or a few days. And software can be quickly fixed and upgraded even after the product has shipped.

I was puzzled over the cause of the computer industry’s deconstruction. Then, about a year later, the reason snapped into focus. I was interviewing technical managers at a client firm and one said that he had formerly been a systems engineer at IBM. He then explained that he had lost that job because modern computers didn’t need much systems engineering. “Why not?” I asked without thinking.

“Because now the individual components are all smart,” he answered. Then I saw it.

[...]

In many traditional computers, and early personal computers, the CPU—the active heart of the machine—did almost everything itself. It scanned the keyboard, looking for a keystroke. When it sensed one, it analyzed the row and column of the keystroke on the keyboard to determine the letter or number that had been pressed. [...] In some cases, designers created custom mini-CPUs to manage these peripherals, but the integration among these devices remained complex and unstandardized and consumed a great deal of systems engineering effort.

After the arrival of cheap microprocessors, all that changed. The modern keyboard has a small microprocessor built into it. It knows when a key has been hit and sends a simple standardized message to the computer saying, in effect, “The letter X was pressed.” The hard-disk drive is also smart so that the CPU doesn’t need to know how it works. It simply sends a message to the hard disk saying “Get sector 2032,” and the hard-disk subsystem returns the data in that sector, all in one slug.

[...]

With the glue of proprietary systems engineering no longer so important, the industry deconstructed itself. Modules did not have to be custom designed to work with every other part. To get a working system, customers did not have to buy everything from a single supplier. Specialist firms began to appear that made and sold only memory. Others made and sold only hard drives or keyboards or displays. Still others made and sold video cards or game controllers or other devices.
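To make the modularity point concrete, here is a tiny sketch (hypothetical classes, purely illustrative, not from the book): once a peripheral hides its internals behind a standardized message interface, the central system needs no device-specific engineering, which is what let specialist firms sell interchangeable modules.

```python
# Toy illustration (hypothetical classes, not from the book) of the shift from
# a CPU that must understand each device's internals to "smart" components
# that hide their internals behind a simple standardized interface.

KEY_MATRIX = [["a", "b"], ["c", "d"]]  # which letter sits at each row/column


class DumbKeyboard:
    """Old style: exposes raw row/column state; the CPU does the decoding."""

    def __init__(self, row: int, col: int):
        self.row, self.col = row, col


def cpu_reads_dumb_keyboard(kb: DumbKeyboard) -> str:
    # The central system needs device-specific knowledge (the key matrix),
    # so every new keyboard design means new systems-engineering work.
    return KEY_MATRIX[kb.row][kb.col]


class SmartKeyboard:
    """New style: an onboard controller decodes keystrokes itself and sends
    a standardized message, in effect: the letter X was pressed."""

    def __init__(self, row: int, col: int):
        self._row, self._col = row, col

    def read_key(self) -> str:  # the standardized interface
        return KEY_MATRIX[self._row][self._col]


# The CPU-side code no longer depends on how the keyboard works inside; any
# vendor's keyboard exposing read_key() is interchangeable, which is the
# property that allowed the industry to split into specialist module makers.
print(cpu_reads_dumb_keyboard(DumbKeyboard(1, 0)))  # -> c
print(SmartKeyboard(1, 0).read_key())               # -> c
```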

General points:

  • Rising fixed costs often lead to industry consolidation.

  • One of the biggest opportunities created by deregulation is often that the inertia of existing players means they will keep playing by the old rules even after those rules are gone.

  • Predictable biases (such as not realising that demand for a new durable product will likely go up and then down once everyone has one; or always forecasting that change will result in a “battle of the titans”, whereas sometimes the incumbents are all defeated)

  • Understand how incumbents will react to change.

  • Figure out what the attractor state of the industry is given the tech & demand forces.

Other random vignettes I enjoyed

Incentives aren’t enough; improvement is possible

First, management may mistakenly believe that improvement is a “natural” process or that it can be accomplished by pressure or incentives alone. As Frank Gilbreth pointed out in 1909, bricklayers had been laying bricks for thousands of years with essentially no improvement in tools and technique. By carefully studying the process, Gilbreth was able to more than double productivity without increasing anyone’s workload. By moving the supply pallets of bricks and mortar to chest height, hundreds or thousands of separate lifting movements per day by each bricklayer were avoided. By using a movable scaffold, skilled masons did not have to waste time carrying bricks up ladders. By making sure that mortar was the right consistency, masons could set and level a brick with a simple press of the hand instead of the time-honored multiple taps with a trowel. Gilbreth’s lesson, still fresh today, is that incentives alone are not enough. One must reexamine each aspect of product and process, casting aside the comfortable assumption that everyone knows what they are doing.

Almost everyone picks the first idea that comes to mind

When you prepared for this class, each of you read the same material. But some focused on one issue and others on another. Some focused on manufacturing, some on software, some on cable TV provider relationships, and so on. When it came to recommending a course of action, almost everyone chose the first one they thought of.

This is predictable. Most people, most of the time, solve problems by grabbing the first solution that pops into their heads—the first insight. In a large number of situations this is reasonable. It is the efficient way to get through life. We simply don’t have the time, energy, or mental space to do a full and complete analysis of every issue we face.

[...]

Facing a complex situation like this makes most people uncomfortable. The more seriously you take it, the more you will see it as a real and difficult challenge that requires a coherent response. And that realization will, in turn, make you even more uncomfortable. It is so ill-structured. There are too many variables, so many factors, too many unknowns, no clear list of potential actions and no way to make clear links between actions and outcomes. You are not even sure what the problem is. Like a swimmer dropped into very choppy waters, it is hard to get your bearings. Under pressure to develop a way out of the difficulty, that first idea is a welcome relief. Thank goodness, here is something to hang on to! It feels much better to get oriented.

The problem is that there might be better ideas out there, just beyond the edge of our vision. But we accept early closure because letting go of a judgment is painful and disconcerting. To search for a new insight, one would have to put aside the comfort of being oriented and once again cast around in choppy waters for a new source of stability. There is the fear of coming up empty-handed. Plus, it is unnatural, even painful, to question our own ideas.

Thus, when we do come up with an idea, we tend to spend most of our effort justifying it rather than questioning it. That seems to be human nature, even in experienced executives. To put it simply, our minds dodge the painful work of questioning and letting go of our first early judgments, and we are not conscious of the dodge.

Panel of experts

Trying to destroy your own ideas is not easy or pleasant. It takes mental toughness to pick apart one’s own insights. In my own case, I rely on outside help—I invoke a virtual panel of experts that I carry around in my mind. This panel of experts is a collection of people whose judgments I value. I use an internal mental dialogue with them to both critique my own ideas and stimulate new ones. I try to do this before putting my ideas before others.

The panel of experts trick works because we are adept at recognizing and comprehending well-integrated human personalities. Thinking through how a particular well-remembered expert might respond to a problem can be a richer source of criticism and advice than abstract theories or frameworks.

Carnegie and the strategy consultant

It was 1890, and there was a cocktail party here in Pittsburgh. All the movers and shakers were there, including Carnegie. He held court in a corner of the room, smoking a cigar. He was introduced to Frederick Taylor, the man who was becoming famous as an expert on organizing work.

“Young man,” said Carnegie, squinting dubiously at the consultant, “if you can tell me something about management that is worth hearing, I will send you a check for ten thousand dollars.”

Now, ten thousand dollars was a great deal of money in 1890. Conversation stopped as the people nearby turned to hear what Taylor would say.

“Mr. Carnegie,” Taylor said, “I would advise you to make a list of the ten most important things you can do. And then, start doing number one.”

And, the story goes, a week later Taylor received a check for ten thousand dollars.

Crossposted from LessWrong