The Innovator’s Victory
Homesteaders came next, swarming onto land once considered unsuitable for crops because it averaged less than twenty inches of rain a year. Unscrupulous promoters promised that the very act of farming would increase the precipitation. “Rain follows the plow,” they said. ...
Special excursion trains brought prospective buyers to the region by the thousands. Salesmen assured them that none other than the former Chancellor of the University of Kansas had determined that the climate was undergoing a permanent shift: Precipitation was increasing while the winds were slowing down. Another expert declared that removing the cover of prairie grasses allowed more rainfall to penetrate the soil. ...
[Donald Worster:] “The Great Plow-Up had going for it ample rainfall for a period of ten or fifteen years. And it just kept encouraging more and more— people thought, as they think again and again, that we’ve turned the corner on climate here. We can see far into the future, and it’s all wet. They knew that there’d been severe droughts in the 1890s, but people forgot about those. They thought, that was the past, this is now, and this’ll be the future.”
—Ken Burns: The Dust Bowl
ANALYSTS and CEOs who believe that the fifteen-year dominance of tech by Microsoft and its OEMs was the normal state of affairs — and thus likely to repeat ad infinitum — are like wheat farmers in the Dust Bowl. One failed crop of products after another, one choking storm of losses and layoffs after another, can’t convince them that the golden age of Wintel won’t reappear next year. For people who are in one way or another invested in the success of Microsoft, its dominance of computing throughout the ’80s and ’90s cannot be perceived as a bizarre, fortuitous anomaly, a stroke of incredibly good luck for Gates and company, but instead must be portrayed as in some way the natural order of things.
It’s not. The success of Windows was an anomaly, and a pretty freaky one at that.
Home Computing
Personal computing (then “home computing”) first blossomed in the late ’70s. Several platforms quickly emerged, but they were all tinkerer/hobbyist affairs. None of them had mainstream appeal as something the average, non-technical person would want to use. Most buyers of home computers expected to program them. And so it didn’t matter much if there wasn’t a big software ecosystem for any particular platform. Buyers shopped price or technical features, and many owned more than one platform. Prices were low, in the consumer range, and the hardware was correspondingly cheap (but still revolutionary for its time).
Business
So at the beginning of the ’80s, there was no substantial market for home-computer software, and no single platform dominated what market there was. Apple was perhaps closest to having such a position, but still wasn’t there yet.
Into this majority-player vacuum stepped the then-colossus of corporate computing, IBM. Throughout the ’60s and ’70s, IBM had developed a reputation for brutally crushing competitors via underhanded tactics (that got it in a lot of trouble with the government but never to an extent that made the company regret having used them). By the dawn of the ’80s, most businesspeople were acutely attuned to the FUD (fear, uncertainty, and doubt) factor, which whispered, “you’ll never get fired for buying IBM.” The question of who was making the better product became secondary compared to the question of who would still exist five years down the road.
IBM took full (and, in hindsight, their last) advantage of this FUD by marching into the microcomputer market with their IBM PC. Smartly, they designed the device to appeal to business users: a pricey, tank-like metal box with a high-persistence green screen that delivered a text-only display, but in a very crisp, 80-column, upper-and-lowercase font, all driven by the mainframe-like MS-DOS. The keyboard was heavy and very clicky, almost reminiscent of a typewriter.
Business users loved it, and looked on all other microcomputing platforms as cheap toys for home hobbyists. The appeal to business users won the day — their high-volume use couldn’t help but spill over into the home market, relentlessly relegating Apple and the other home-computer makers to the low end of that market. (Apple was perhaps worst-positioned for this storm, as its computer was aimed higher in the market than those of Commodore, Tandy, etc., and thus was most harmed by IBM’s complete takeover of the high end.) Swiftly, the IBM PC created a market-majority software platform, which also was the first really big third-party app ecosystem in computing.
Oops
But then, a funny thing happened. Almost as soon as they had created it, IBM abruptly lost the PC market to the cloners. In what might be the biggest blunder in the history of business, IBM somehow failed to adequately protect its rights to the essential specs of its own product, and found itself in a situation where any company could make cheap PC clones that would run all the same software that ran on the PC. The cloners didn’t have to pay IBM a nickel, or even get permission to do it.
IBM tried valiantly to position its PC as superior to the clones, but in vain: the same businesses (and especially home users) that had handed IBM control of personal computing now took it away and gave it to the cloners. The IBM PC quickly faded to an irrelevant shell of its former self.
Manna For Gates
And who was the big winner of the cloners’ victory? Microsoft. The cloners didn’t have to get approval from IBM, but to be able to run all that PC software, they did have to license MS-DOS, the operating system of the IBM PC, from Microsoft. And Microsoft did not accidentally let go of its rights to that OS. So Microsoft suddenly found itself in de facto control of the whole PC market, able to dictate terms and specs to a host of cloners (thereafter known as “Microsoft OEMs”). And over the next several years, Microsoft ballooned into a money-machine powerhouse like nothing else in tech, making CEO Bill Gates the richest person in the world.
Vicious
In 1984, after all of the above had come to pass, Apple tried to re-enter the market with the brand-new Macintosh. It had some distinct advantages (graphical user interface, mouse, high-resolution bitmapped graphics) and some disadvantages (black-and-white, pricey, tiny screen). But the overwhelming problem for the Mac wasn’t anything as technical as that — the MS-DOS PC simply had a huge market of apps, and the Mac was starting basically from zero. The “virtuous circle” was working for MS-DOS, while the “vicious circle” was working against the Mac — i.e. most people want to buy the platform for which apps are being written, and they want to write apps for the platform that most people are buying. The only chance the Mac had to buck this circle was its distinct hardware advantages, but they weren’t enough. Even before Microsoft came out with Windows, the Mac’s peak market share in personal computers was sub-10%.
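To make the circle concrete, here’s a minimal toy simulation of that feedback loop. This is my own illustrative sketch, not anything from the historical record: the platform labels, the starting numbers, and the squared “herd effect” weighting are all invented purely for illustration.

```python
# Toy model of the virtuous/vicious circle: new buyers flock to the
# platform with more apps, and new developers flock to the platform
# with more users. All numbers are invented for illustration only.

def simulate(users_a, users_b, apps_a, apps_b,
             rounds=10, new_users=100.0, new_apps=10.0):
    for r in range(1, rounds + 1):
        # New buyers disproportionately favor the app-rich platform;
        # the squared weighting stands in for the herd effect.
        wa, wb = apps_a ** 2, apps_b ** 2
        users_a += new_users * wa / (wa + wb)
        users_b += new_users * wb / (wa + wb)
        # New developers disproportionately favor the user-rich platform.
        wa, wb = users_a ** 2, users_b ** 2
        apps_a += new_apps * wa / (wa + wb)
        apps_b += new_apps * wb / (wa + wb)
        share = 100 * users_a / (users_a + users_b)
        print(f"round {r:2d}: platform A user share = {share:5.1f}%")

# Platform A starts with a big head start in apps, roughly the
# MS-DOS-vs.-1984-Mac situation; platform B never closes the gap.
simulate(users_a=1000.0, users_b=300.0, apps_a=500.0, apps_b=50.0)
```

Run it and platform A’s user share only climbs, round after round. Under feedback like this, a merely better product is not enough to reverse a head start in apps; that’s the hole the Mac was in.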
And, of course, Microsoft did come out with Windows, an obvious, cheesy rip-off of the Mac. Apple sued, unsurprisingly, and pressed the case for years. But in the end, district-court judge Vaughn Walker threw virtually the whole case out the window (a ruling the appeals court later upheld), sinking the Mac’s hopes, and cementing the market dominance of Windows for many years to come.
Random
The take-home lesson of this story seems to be that new markets are chaotic and random, and until a solid victor is established, unpredictable events can have profound effects on how things shake out. How would things have turned out if IBM had stuck to mainframes and never entered the microcomputer market (or entered only much later)? What would have been the end result if Microsoft hadn’t dared to copy the Mac, or if Walker had ruled in Apple’s favor? What if IBM had properly retained the rights to its own product, and demanded exclusivity for its OS? There’s no reason why things had to wind up the way they did; they could have been very different.
Pundits
But many tech pundits preferred not to see this, and instead tried to convince themselves (and anyone within earshot) that the victory of Microsoft and its OEMs was somehow inevitable. All through Apple’s 2000-to-present ascendancy, these pundits repeatedly mispredicted that Microsoft and its OEM gang (or a similar OEM alignment under Google’s Android) would supplant every Apple creation. And although some pundits learned their lesson after one or two comically incorrect forecasts, most of them reacted to Apple’s repeated successes with ever-shriller cries that the company had peaked, or that its heyday was over, or that it was about to fall away into has-been irrelevance. The common theme among them seemed to be that Apple’s post-2000 success is the freak anomaly, soon to be displaced once again by the normal control of tech by a gang of un-innovative OEMs, led by an equally un-innovative app-platform manager.
Most of these pundits had no advanced intellectual theory to back their beliefs. They just stated them as if they were obvious, and offered no explanation why. “We’ve seen this movie before,” was about as analytical as they got, never offering anything like a sophisticated reason to believe in Apple’s impending downfall.
Ivory Tower
But one did: Harvard’s Clay Christensen.
It can only ever be my unverifiable suspicion that Christensen shaped his theory to satisfy the desire to cast Microsoft’s pre-2000 dominance as a perpetual, natural phenomenon. But we can definitely say that he has taken full opportunity to portray that dominance as bolstering his theory, and has used his theory as justification for believing that Apple’s post-2000 dominance is not long for this world.
Famous as the creator of “disruption theory,” Christensen posits that an innovator like Apple can enjoy only temporary success with a new-product-category breakthrough before being ravaged by “modularity” — commodity component manufacturers banding together to make a good-enough, copycat product that beats out the innovator’s original. To stave off doom, the innovator must be perpetually on the run, skipping from one new product innovation to the next.
To get a taste of the thinking, here’s James Allworth, Christensen’s prime protégé and co-author of Christensen’s 2012 book, How Will You Measure Your Life?, describing Apple’s current level of success as being virtually a violation of physics (Exponent #24):
What caused [the ’90s Mac] to be crappy? If it was so far in front, how were modular players able to catch it up, and surpass it? So, these theories are never perfectly predictive. We’ve talked about this in terms of it being a very long, multivariable equation, and each one explains some of it. And I think what the theory would assert is that what you [Ben Thompson] just described is the standard thing that happens in an integrated-vs.-modular world. At the start, when the complexity and solving the problem is actually figuring out how all the components best work together, that the integrated player wins. But once those pieces have been figured out, it’s actually the modular players, who are just best able to focus on their own little piece of the puzzle, that will end up winning. And that’s why Microsoft, Intel, like Wintel ended up beating— the theory’s explanation would be, Wintel beat Apple on the basis of that. The reason that the Mac sucked was because they were trying to do too many things. And an organization, a typical organization that bites off that many things is not gonna be able to do it.
Now, what gets interesting is the question around, it appears that if you believe that, and you believe that the theories that we’re referring to, integration-vs.-modularity and disruption, hold for a large explanatory power in that multivariable equation, that somehow Apple right now is defying gravity. It’s defying the theory. And I think that’s what makes it interesting. And I think your explanation of [Apple customers’] great experience is very compelling in explaining why they’ve managed to do as well as they have. What’s interesting— I mean, there’s also another interesting question to me, which is, like, why is it the way it is right now, versus the way it played out previously? Because I think the way it played out previously is the way that it typically does play out. Like, it makes sense to my mind that that’s what happens; the integrated player can’t develop all these different pieces, and sustain scale, and figure it out once all the interdependencies have been worked out. And yet, somehow Apple’s managing to do it right now.
Somehow indeed. The essential flaw in Allworth’s logic comes right at the start: The ’90s Mac was not “far in front”; it was always a sub-10% niche player. Crucial facts like this get brushed under the rug in the Apple naysayers’ history books, and that brushing is essential to their case. Here’s Christensen himself explaining how Apple’s pre-2000 history fits into his thesis (as quoted by Horace Dediu):
[T]o be fast and flexible and responsive, the architecture of the product has to evolve towards a modular architecture, because modularity enables you to upgrade one piece of the system without having to redesign everything, and you can mix and match and plug and play best of breed components to give every customer exactly what they need. ...
So here the rough stages of value added in the computer industry, and during the first two decades, it was essentially dominated by vertically integrated companies because they had to be integrated given the way you had to compete at the time. We could actually insert right in here “Apple Computer.” ... Do you remember in the early years of the PC industry Apple with its proprietary architecture? Those Macs were so much better than the IBMs. They were so much more convenient to use, they rarely crashed, and the IBMs were kludgy machines that crashed a lot, because in a sense, that open architecture was prematurely modular.
But then as the functionality got more than good enough, then there was scope, and you could back off of the frontier of what was technologically possible, and the PC industry flipped to a modular architecture. And the vendor of the proprietary system, Apple continues probably to make the neatest computers in the world, but they become a niche player because as the industry disintegrates like this, it’s kind of like you ran the whole industry through a baloney slicer, and it became dominated by a horizontally stratified population of independent companies who could work together at arm’s length interfacing by industry standards.
There’s really no nice way to say it: Christensen’s history of Apple is fiction. There is no mention of the fact that the Microsoft-plus-OEMs arrangement was in full force before the Mac even hit the market. There is no mention of the fact that the Mac never broke 10% market share. And his bit about crashiness is utterly false; early Macs crashed plenty.
Things turned out the way they did not because they “had to,” but because of a fortuitous sequence of freak events. No business theory could have predicted those events, nor can any theory claim support from them; they illustrate only the unpredictability of what happens in an unstable new market. Christensen is forcing history to fit his theory, apparently because he’s simply in love with it (and/or has staked his reputation on it so deeply that it’s too late to back out now).
Probably Christensen’s most famous misprediction occurred in mid-2007, just as iPhone was being released:
[T]he prediction of [my disruption] theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.
Note that Christensen here is quite specific: not just that iPhone won’t be a success, but that it is the prediction of his theory that it won’t be a success. And of course, we all know what happened: iPhone is among the most successful products ever made. What does that say about Christensen and/or his theory?
Suppose that sometime in the past couple years, Christensen had written a chapter-length article explaining, very specifically, his misprediction of iPhone failure. (Christensen has written several books and many articles, and he teaches classes and gives speeches all the time — he writes for a living, and an article of the size I am about to describe would be no big effort for him.) Now suppose that in part one of this article, he explained in full detail, step by step, how he applied his theory in mid-2007 and arrived at this result: What data did he use, how did the theory interpret that data, etc.
Suppose that part two of the article identified — again in high detail — exactly what went wrong in part one, that would cause it to predict such an extreme opposite of actual events. Then suppose that in part three, he re-applied his theory — this time correctly, without the mistakes identified in part two — to the data available in mid-2007, and this time he arrived at the retrodiction that iPhone would be a phenomenal success.
The paper could conclude with a section explaining how users of Christensen’s disruption theory (including Christensen himself) could avoid making similar mistakes, and thus hopefully be able to apply the theory in a way that is unlikely to predict failure for highly successful products, or vice versa.
Now, such a paper could still be very wrong. The analytical mistakes identified in part two might not be the actual mistakes that led the theory astray. The re-analysis in part three might arrive at the correct conclusion due more to 20/20 hindsight than to any actual ability of the theory to accurately assess a product’s prospects. But — at least the existence of such a paper (written in good faith) would indicate that Christensen takes his own theory seriously! It would show that he really believes his theory has useful value, and is attempting to keep it that way.
So do we actually have a paper like this from Christensen? I haven’t exhaustively researched his writings, so I can’t say definitively that no such document exists. Nevertheless, I think if there were such an article, I would have heard about it by now. Which means we can reasonably conclude that Christensen isn’t even trying to maintain his theory as an item of genuine academic value. Or, put more bluntly, he doesn’t really believe his own theory — it’s just a position, a pretentious posture, an intelligent-sounding re-description of events that have already occurred.
Disrupted
Last June, another Harvard professor, Jill Lepore, wrote possibly the first high-profile takedown of Christensen’s theory, “The Disruption Machine — What the gospel of innovation gets wrong” (The New Yorker), and Christensen et al. didn’t take it very well. With zero pushback from co-host Thompson, Allworth (Exponent #8) described Lepore’s criticism as “strangely personal,” even though the whole article was about nothing but Christensen’s theory and how his own examples of the theory in action don’t really fit it. Allworth apparently did not think it strangely personal when Christensen immediately responded to Lepore’s article with a flustered, defensive Businessweek interview, in which he called her criticisms “egregious ... truly egregious,” as well as “a criminal act of dishonesty,” and said her article was an attempt “to discredit Clay Christensen, in a really mean way.” He also referred to himself in the third person while repeatedly referring to Lepore as “Jill.” The interviewer called him on it in the final question:
You keep referring to Lepore by her first name. Do you know her?
I’ve never met her in my life.
For all his flabbergasted paroxysms, in the Businessweek interview Christensen did manage a tiny attempt to explain (spin?) his 2007 misprediction of iPhone failure (quoted here in its entirety):
So the iPhone: There’s a piece of the puzzle that I did not understand. What I missed is that the smartphone was competing against the laptop disruptively. I framed it not as Apple is disrupting the laptop, but rather [the iPhone] is a sustaining innovation against Nokia. I just missed that. And it really helped me with the theory, because I had to figure out: Who are you disrupting?
Even taking this apparent mea culpa at face value, one may wonder: If the master of disruption theory can see a mistake like this only in several-year hindsight, then how is anyone supposed to make use of the theory?
But never mind that; there’s a much bigger, factual problem here. In iPhone’s first five years, all four mid-2007 makers of successful smartphones — Nokia, Palm, RIM, and Motorola — fell into abysmal malaise, at best. And yet... iPhone succeeded by disrupting laptops?? No. It didn’t. iPhone succeeded by blowing away the prominent smartphones of the time. Christensen didn’t “miss” anything, or fail to “understand” some sideways “disruption” — he was just wrong. Flat-out wrong. It’s not mean to say so, it’s not egregious, and it’s definitely not criminal. Christensen thought iPhone wouldn’t stand up to other smartphones, and it did. He was simply, purely, 180° incorrect.
Four months later, Christensen collected himself for a more intellectual-sounding defense in Business Insider’s “Harvard Management Legend Clay Christensen Defends His ‘Disruption’ Theory, Explains The Only Way Apple Can Win,” in which he was interviewed by Business Insider founder Henry Blodget. In the interview’s introduction, Blodget tells us that “Harvard’s Clay Christensen is today’s most influential modern management thinker,” and that he (Blodget) is “a huge devotee” of Christensen’s fame-founding book, The Innovator’s Dilemma. Blodget also calls Lepore’s article “a very personal attack.” None of this should be surprising coming from Blodget, who helpfully informed his readers in the spring of 2011:
“Android Is Destroying Everyone ... iPhone Dead In Water”
“Apple is fighting a very similar war to the one it fought — and lost — in the 1990s.”
And who in the fall of 2013 warned:
“What Apple does not seem to understand, however, is the fate that almost all niche platform providers eventually succumb to — gradual loss of influence, power, and profitability, followed by irrelevance. If you don’t understand what happens to platform providers that lose momentum and become niche players, just look at BlackBerry. ... What Apple zealots who crow about the company’s current profitability should recognize is that BlackBerry was highly profitable only a few years ago.”
“[E]ven if Apple does not get completely marginalized, [its] strategy will likely hurt the company and its shareholders (and its customers!) over the long haul. And it could be disastrous.”
So it would appear that Christensen found a friendlier forum this time.
Though Christensen’s composure has notably recovered from his Businessweek embarrassment, he still manages to tell us that Lepore’s article was “personal,” and that “all of the points that she raised were not just wrong, but they were lies. Ours is the only theory in business that actually has been tested in the marketplace over and over again. ... for her to take that on, to take me on and the theory on — I don’t know where the meanness came from.” He also refers to himself in the third person again, and it’s unclear whether he is willing to use Lepore’s surname; the only place in the transcript where he refers to Lepore by any name at all reads “[Lepore]” (i.e. in brackets), leaving the reader to wonder what he actually may have said.
He does little to convince his audience that the essential meat of his theory actually survives Lepore’s points unscathed, but he does manage to vomit up more unsupported insistence that Apple is running out of road:
CC: ... Apple won’t succeed, because in the end modularity always wins.
HB: And what the Apple believers will say is, “... they’re going to sell 65 million iPhone 6’s in the first quarter, so could you be more wrong?” That’s what they will say.
CC: That’s right. And what Clay will say in response is that you can never predict where the technology will come from, but you can predict with perfect certainty that if Apple is having that extraordinary experience, the people with modularity are striving. You can predict that they are motivated to figure out how to emulate what [Apple is] offering, but with modularity. And so ultimately, unless there is no ceiling, at some point Apple hits the ceiling. So their options are hopefully they can come up with another product category or something that is proprietary because they really are good at developing products that are proprietary. Most companies have that insight into closed operating systems once, they hit the ceiling, and then they crash.
Allworth also pounds this perspective on a regular basis. Here he is on Exponent #11:
“The way I like to think about this is whether it makes sense to be integrated or modular. That varies over time based on where you are in the product category life cycle. And at the beginning of that, when performance is absolutely not good enough, it makes most sense to be integrated, because you have most control over all the different components. As time goes by, the modular players can see how the integrated player has put everything together, and they can start to copy it, and pull it all apart, and focus on their own little [piece]. And, at least the theory would suggest, the modular players start to catch up. And that was the reason why I asked the question, because it might be the case that the difference between iOS and Android is starting to narrow. I would actually say that’s true.”
“When they [Apple] run out of things to improve, when the things to improve become less obvious, the ability to increase performance, to improve the experience, is relatively limited. I’m just going to put it in provocative language: It becomes easier for the modular players to copy the integrated player, because the integrated player runs out of places to go.”
Ask yourself whether their position improved even slightly in the well over two years since Christensen was Dediu’s guest on The Critical Path, here quoted in Forbes by Steve Denning:
Christensen’s hope of salvation for Apple is not so much that they can defeat the innovator’s dilemma or the threat of modularity but rather that they can defer it indefinitely:
“The salvation for Apple may be that they can find a sequence of exciting new products whose proprietary architecture is demanded by the marketplace, and they can keep going from one product to another so that they will not have to confront this dilemma.”
Like a scratched vinyl record, Christensen, Allworth, and (of late) Dediu hammer home the received wisdom that Apple must forever flee before an advancing army of modular commoditizers that greedily devours anything Apple designs, and that it is only Apple’s deft skill at this never-ending flight that has kept it alive and financially healthy for the past fifteen years. If they say it enough times, will it become true? Non-rebootable careers at Harvard notwithstanding — no. It won’t.
Innovation
The normal state of affairs is that the innovator is very, very successful. The innovator makes not just a quick killing in the early market, but continues to dominate that same market as it matures. The innovator re-invests its earnings into the product, improving it, and staying ahead of would-be competitors.
Copycat products carve out little niches in whatever areas the original innovator didn’t care to go, such as the ultra-cheap, zero-profit low end, or exotic, special-feature versions of the product.
Apple’s re-investment strategy goes beyond simply improving the product or making the experience better. Apple has shown a clear pattern of seeking to control more of what goes into its products, which not only protects it from backstabbing partners (of which there have been many), but actually improves its products in ways that can’t be matched by competitors. Apple’s use of ARM chips in iPod gave economies of scale to everyone who used ARM chips, but Apple’s use of its own A8X chip in its latest iPads helps no tablet maker but Apple.
How does anything that’s been happening with Apple in the past fifteen years fit into Christensen’s disruption theory? It doesn’t. How does it fit into the more general punditry’s cries that Apple is on the verge of being run over? It doesn’t.
Apple’s market valuation is now about fifty percent greater than that of any other company on the planet in any business, at a time when many market analysts think its stock is undervalued. Apple’s only serious competitor, Samsung, is going into a financial tailspin. Apple is living, breathing proof that the innovator typically succeeds. The innovator normally wins — despite a fifteen-year anomaly when Microsoft got to run the show.
Update 2015.03.25 — added “With zero pushback from co-host Thompson”
Update 2015.05.27 — added “Ask yourself if ... confront this dilemma.”
See also:
The Old-Fashioned Way
&
Apple Paves the Way For Apple
&
iPhone 2013 Score Card
&
Disremembering Microsoft
&
What Was Christensen Thinking?
&
Four Analysts
&
Remember the iPod Killers?
&
Answering the Toughest Question About Disruption Theory
&
Predictive Value
&
It’s Not A Criticism, It’s A Fact
&
Vivek Wadhwa, Scamster Bitcoin Doomsayer
&
Judos vs. Pin Place
&
To the Bitter End
&
The Self-Fulfilling Prophecy That Wasn’t