Nicolas Petit, David J Teece, Innovating Big Tech firms and competition policy: favoring dynamic over static competition, Industrial and Corporate Change, Volume 30, Issue 5, October 2021, Pages 1168–1198, https://doi.org/10.1093/icc/dtab049
Abstract
This paper gives a fresh account of competition in the digital economy. Economic analysis in the field of industrial organization remains largely focused on a sophisticated version of the Schumpeter–Arrow debate, which is unresolved and largely irrelevant. We posit the need to look at competition anew. Static models of monopoly firms and markets in equilibrium are often used to characterize Big Tech firms’ size and scope. We suggest that this characterization is inappropriate because the growth and diversification of many digital firms lead to a situation of broad-spectrum competition that cuts across markets. Current market positions do not reflect entrenched monopoly power but are vulnerable to the competitive pressure of disequilibrating forces arising from the use of data-driven operating models, astute resource orchestration, and the exercise of dynamic capabilities. A few strategic errors by management in the handling of internal transitions and/or external challenges could leave these firms competitively impaired. The implications of a more dynamic understanding of the competition process in the tech sector are explored. We consider how big data and entrepreneurial management impact firm performance. We also explore the nature of different types of rents (Schumpeterian, Ricardian, and monopoly rents) and suggest a modified long-term consumer welfare standard for competition policy. We formulate preliminary tests and predictors to assess dynamic competition. Our perspective advances a policy stance that favors innovation.
1. Introduction
In the past 20 years, a small group of large US firms known under the moniker of “Big Tech” has developed many of the digital products and services that consumers use.1 These firms are Apple, Amazon, Facebook, Google, Microsoft, and Netflix. The expansion of these business organizations is a defining feature of today’s digital economy. They have accounted for over 50% of the growth in equity value in US markets over the past 20 years. The rise of Big Tech has raised many political concerns. In this paper, we focus on competition policy and sidestep issues relating to democratic threats, the control of content, or free speech. We believe such issues are analytically separable from the monopoly power problems that competition policy concerns itself with.
The rise of Big Tech firms is having the welcome effect of causing a resurgence of interest in industrial organization. The emerging scholarship is mixed. On the one hand, there is a tendency to treat Big Tech firms as different because innovation in general (both technological and business model), and technical inputs in particular (big data, intelligent algorithms, and skilled engineers), clearly impact market structure and economic performance. On the other hand, industrial age explanations like monopoly power, anticompetitive leveraging, and predatory mergers are often used to supply theories for the durability and diversification of Big Tech firms. There is little or no mention of the role of entrepreneurship and management or of new operating models that deliver value in new and better ways.
We are skeptical about the power of popular narratives to account for the totality of the competitive circumstances at hand. Our skepticism is aroused by the record of the Big Tech firms.2 There are many indicators suggesting that dynamism, not a base of monopoly power, is what is at work. The digital economy shows unprecedented productivity growth, rapid innovation, and new firm entry. In consumer digital goods and services in telecommunications and broadcasting, output has risen, quality has increased, and prices have declined (Byrne and Corrado, 2020). This state of affairs could not reasonably exist if Big Tech firms were dominant players that suppressed competition by using scale, supposedly like the large iron, oil, and steel trusts of the industrial age. Theoretically, the development and growth of the digital sector could be even higher, and the welfare benefits greater, without Big Tech firms. However, proponents of the monopoly argument have yet to articulate the “but for” ideal world that they imply would otherwise exist.3 Our intuition, thus, strays from the monopoly explanation. Instead, we might be observing a group of diversified Big Tech firms coexisting and competing vigorously in oligopoly with each other, with new and adjacent firms entering the fray from time to time. One of us referred to this broad-spectrum competition as the “moligopoly” hypothesis (Petit, 2020). A similar interpretation was given in 2021 by The Economist, which noted that monopoly explanations were “getting harder to sustain” as digital markets in the US are “shifting towards oligopolies in which second and third firms compete vigorously against the incumbent” (The Economist, 2021).
The inability of monopoly narratives to properly account for the source of the long-term competitive advantages and the fluid boundaries of Big Tech firms requires searching for alternative theories. As Marshall Van Alstyne puts it, “we need new and better economics and legislation to get this right—and to understand that the nature of creating value has shifted drastically.”4 Nobel laureate economist Arrow (1996) had forewarned us 20 years ago that “the role of information would seem to require a new approach to the theory of oligopoly”. New approaches are needed not just because of the growing importance of information but also because of the nature of innovation and associated data orchestration. In addition, business models have changed dramatically and (digital) ecosystems are now a common feature of the competitive landscape (Jacobides and Lianos, this issue).
With this background, we find it necessary to advance and further develop what we call the dynamic competition paradigm. As a matter of theory, the dynamic competition paradigm treats competition and innovation as co-determinants of changes in market structure and firm positions. In applied terms, a dynamic competition paradigm allows observation of industry-level differences in how changes in market concentration or innovation affect competitive outcomes. The point, simply put, is that not all industries require a similar mix of competition and innovation to raise consumer welfare by a similar level.5
It is our conjecture that a better understanding of dynamic competition in general, and of organizational capabilities, business models, and ecosystems in particular, would result in a more careful approach to competition policy, which is currently poised to favor increased intervention against Big Tech firms. In 2020, the European Commission proposed a “Digital Markets Act” that aims to rein in Big Tech’s “gatekeeper” power through the application of per se prohibitions to categories of business conduct and business models considered unfair or injurious to competition. And in June 2021, US House Democrats and Republicans introduced five antitrust bills that propose to subject Big Tech firms to severe obligations, including an M&A ban, data portability and interoperability requirements, and line-of-business restrictions. Type I errors that reduce innovation, growth, and prosperity will likely flow from this policy trajectory (concerns shared with Jenny, this issue). Other policies, for instance stronger self-regulation (Cusumano and Gawer, this issue), might protect consumers from losses and preserve innovation incentives.
In this paper, we try to bring forward new economic insights. We believe it is important for competition policy to prioritize innovation as a policy goal and to adopt analytical frameworks that account for dynamism or the lack thereof. Moreover, in order to support and advance innovation, it is critical for competition policy to embrace an intermediate-to-long-term orientation. Short termism is not only the enemy of good management, but it is also the enemy of good public policy.
Our goal is to join forces with other scholars, policy analysts, and competition lawyers, including the authors in this special issue of ICC, to advance a conceptual framework that favors dynamic competition. This framework is undergirded by a systematic (not ad hoc) theory of innovating digital firms. We also focus on understanding the origins of rents in the digital economy. We develop operational welfare criteria, a competitive process formulation, and predictors for the assessment of long-term competitive effects under uncertainty.
2. Static vs dynamic competition
The analytical frameworks that have informed competition policy in modern economies tend to favor static notions of competition. In this section, we (i) give definitions, (ii) provide a short intellectual history of static and dynamic competition, (iii) discuss the need for further consideration of dynamic competition, and (iv) examine the reasons for the persistence of static competition analysis in modern competition policy. We close with a description of possible models of innovation that could be used in competition policy.
2.1 Definitions
Static competition describes a situation in which firms compete for existing rents. In static competition, firms supply close-to-perfect substitute products. Rivalry results in short-term price decreases and cost-cutting, including wage reductions.
Dynamic competition, on the other hand, describes a situation in which firms compete for future rents. In dynamic competition, firms use innovation to introduce new products, processes, and services. Rivalry results in product differentiation, recombination, integration, diversification, or platformization. It is a type of competition animated not by firms that compete head-on with similar products but by heterogeneous competitors, complementors, suppliers, and customers, using innovation to bring forth new products and processes. Such competition improves long-term factor productivity, raises consumer welfare, and supports higher wages.
2.2 Intellectual history
The theory of dynamic competition has prestigious intellectual origins, but it is also one of enduring policy marginalization. Schumpeter stands as the father of theories of dynamic competition. Schumpeter (1942: 83) observed over half a century ago that dynamic competition is much more effective at improving consumer welfare than is static competition. He analogized static versus dynamic competition to the difference between bombardment and forcing a door. Dynamic competition is so much more important that:
it becomes a matter of comparative indifference whether competition in the ordinary sense functions more or less promptly; the powerful lever that in the long run expands output and brings down prices is in any case made of other stuff.
The “other stuff” Schumpeter referred to is innovation, which, through the introduction of new products and processes, is the more powerful form of competition that both erodes and destroys existing profit streams (1942: 84). Unfortunately, Schumpeter did not make his perspective operational in any meaningful sense. Besides, Schumpeter left many stones unturned. Schumpeter did not draw differences between types of technologies. And it remains open to interpretation whether the “creative destruction” that Schumpeter talked about is a “continuous” process, or one that occurs in “perennial gales”, leaving open the question of what should be done in the interim.
Friedrich A. Hayek, an intellectual leader of the Austrian School and eventual Nobel laureate, is another key figure in any discussion of theories of dynamic competition. Hayek (1948: 94) argued that “competition is by its nature a dynamic process whose essential characteristics are assumed away by the assumptions underlying static analysis.” The implication that Hayek recognized is that one cannot regard the wishes and desires of consumers as information given to producers; instead, one must view the task of identifying consumers’ preferences as a problem that the process of competition itself can discover. But Hayek and Austrian economics did not fare better than Schumpeter in terms of policy influence.6 Because the essence of competition is the dynamic pattern by which competition arises and proceeds, not the equilibrium never attained, the Austrian school disfavors deterministic models of competition and favors an understanding of competition as an emergent process. Unamenable to estimation, optimization, and prediction, the Austrian perspective remained remote from the practical demands of public policy formulation and implementation.
The recognition of the importance of dynamic competition could well have happened when the Chicago School bequeathed to the world the field of law and economics in the 1960s. Chicago made a magnificent intellectual contribution to policy by injecting economics into the law. Nobel Laureate Ronald Coase’s “The Problem of Social Cost” (1960) was perhaps the beginning of that new field. Insights and methodologies spilled over to the emerging subfield of antitrust economics. Microeconomic theory was employed to provide new and valuable insights. Unfortunately, Marshallian and Samuelsonian microeconomic theory afforded little room for incorporating technological innovation. R&D was just a cost with uncertain benefits. Efficiency, not innovation, was seen as the goal of the business enterprise. The standard tools of microeconomics under perfect competition were employed. Firms were viewed rather primitively as “production functions.” Along the way, Bork (1978: 60) urged the antitrust community to use the model of perfect competition “as a guide to reasoning about actual markets” and to illustrate allocative efficiency.7
The post-Chicago revolution of the 1980s did little to change the direction of travel. Competition policy absorbed known features of modern industrial organization research, and in particular its heavy leaning toward theory (Tirole, 1988). As competition policy became more theory driven, its analytical tools have tended to oversimplify hard-to-model empirical phenomena like the impact of innovation on competition. Game theory, for example, supplied general explanations for empirical regularities found in oligopoly markets but has failed to give predictions reflective of the complexity of marketplace competition because it depends on unattainable exactitude in the specification of firms’ strategies and timing of actions (Fisher, 1989). The well-known, and elegant, modern theory of multisided markets has similar shortcomings. Multisided market theory has produced multiple efficiency and inefficiency possibility theorems without, however, supplying clear policy guidance to real-world decision-makers. And when economists have tried to be more empirical, innovation has been measured by proxies like patent counts and R&D expenditure, which give at best crude insights and occasional clues about the complexity of the processes involved in innovation-led dynamic competition.
Notwithstanding these shortcomings, no policymaker would disagree with the statement that innovation brings competition. Yet, despite its obvious importance, dynamic competition is not embraced as widely in practice as is static competition. One reason for this neglect is instrumental: it is harder to measure dynamic than static competition. Innovation often does not show up immediately in directly observable economic statistics, such as prices, markups, or cost data. Another reason is behavioral. Often, there is a tradeoff between long-run innovation benefits from dynamic competition and possible short-term reductions in price competition. And because of our well-documented tendency to discount future rewards more than present ones, the tradeoff is often resolved by giving preference to static losses and gains, even when they are transient.
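The behavioral point can be made concrete with a simple present-value comparison (a stylized sketch in our own illustrative symbols, not drawn from the paper). Suppose an intervention delivers an immediate static gain $S$ but sacrifices a dynamic (innovation) gain $D$ that would arrive only after a delay $T$. A decision-maker discounting at rate $r$ prefers the static gain whenever

$$S > D\,e^{-rT},$$

so even a much larger $D$ is set aside if $r$ is high or the horizon $T$ is long, which is precisely the bias toward static analysis described above.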
There have been a few voices reminding the law and economics and competition policy community of the importance of innovation. One of us (Teece) has persisted, making the case for the primacy of dynamic competition for over 30 years, noting:
… in the world of high technology there is high uncertainty and supercharged competition… waves of new product introductions are frequently accompanied by premium prices initially, followed by rapid price declines… antitrust economics and the industrial organization literature manifests a limited understanding of the nature of competition in high-technology industries, where competition is driven by innovation. Among the public policy (and economic) issues that have not been well explored are the evolutionary processes at work, [and] the nature of the sources of economic rent…8
Today, Schumpeterian and Austrian perspectives that embrace deep uncertainty are recognized as highly relevant to the competitive activity of the tech sector. The widely popular “lean startup” model (Ries, 2011) emphasizes this basic point. Perhaps, a reexamination of the historical marginalization of dynamic competition is needed.
2.3 The need for a dynamic competition paradigm
Interestingly, the need for competition policy to consider dynamic competition was apparent long before the advent of Big Tech firms. In 1985, former head of the Department of Justice’s antitrust division Bill Baxter wrote that “the contribution of technological advances to our economic well-being is very substantial when compared to the damage that could be caused by restrictive behavior the antitrust laws seek to halt” (Baxter, 1985: 82). Twenty-five years later, Federal Trade Commissioner Rosch (2010: 3) found that circumstances had not changed very much. His attempt to explain why the enforcement agencies had failed to embrace dynamic competition was both revealing and concerning in its candor:
Antitrust enforcement has historically focused on static [rather] than dynamic analysis… for a number of reasons. First the antitrust community… both lawyers and economists…have far greater familiarity and comfort with static analysis rather than dynamic analysis. Second, there is less incentive for parties to take the time to develop arguments based on dynamic analysis. Third, there’s the perception – right or wrong – that dynamic analysis is less well developed and less measurable than static analysis.
Almost a decade later, Commissioner Christine Wilson of the US Federal Trade Commission lamented again that frameworks incorporating dynamic competition had been neglected, noting that:
the economic literature also acknowledges that innovation over the long run will deliver very large consumer welfare gains.
She also noted that competition policy authorities:
routinely struggle to account for dynamic effects.9
Finally, about 5 years ago, the Organization for Economic Cooperation and Development stressed that “the methodology of competition authorities should move from a focus on static competition towards dynamic competition” without, however, lessening their “commitment to the rigour of evidence-based enforcement” (OECD Secretariat, 2017: 3). The calls by Baxter, Rosch, Wilson, and the OECD to integrate dynamic competition analysis into policymaking have remained essentially unanswered.
Despite some limited progress, static competition dominates the analytical models employed in competition policy. To be more specific, competition policy in effect (i) makes extensive use of equilibrium models when digital technologies display disequilibrium properties; (ii) relies essentially on industrial economics expertise and only marginally uses insights from business and technology management; and (iii) often eliminates uncertainty in order to formulate simple rules (see the parallels with Soros, 2013).10 We are thus still far from the coherent paradigm change called for by some agency officials and competition policy institutions.
2.4 The market structure-innovation “trap”
Why does this state of affairs in competition policy exist despite the fact that many very good scholars have worked hard to improve the field and its tools?11 In our view, the focus on innovation as the driver of competition has been thwarted by a theoretical trap, or dead end, that has preoccupied economic scholars for too long. The economic literature has latched on to one of Schumpeter’s many hypotheses, namely that monopoly was needed to help fund innovation and compensate for the risks. This has been juxtaposed with Arrow’s (1962) model, which compared the additional profit to be gained from a process innovation under perfect competition with that gained from the same innovation in a monopoly market protected by an iron-clad patent. An innovating monopolist already earns supranormal profits and, by innovating, merely replaces its existing profits with slightly improved ones. For this reason, it is conventionally postulated that a monopolist may have less incentive to innovate than a firm in a particularly competitive market (Federico et al., 2020).12 Carrier (2008: 396) has described the Schumpeter–Arrow debate as “one of the most heated discussions in economics in recent years”.
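Arrow’s replacement effect can be stated compactly (a minimal sketch in our own notation, not Arrow’s). Let $\pi_{\text{new}}$ denote the profit the innovation would generate for whoever controls it, and $\pi_{\text{old}} > 0$ the monopolist’s pre-innovation profit. A competitive firm earning roughly zero gains the full $\pi_{\text{new}}$ from innovating, while the incumbent monopolist gains only the increment:

$$\underbrace{\pi_{\text{new}} - 0}_{\text{competitive firm's gain}} \;>\; \underbrace{\pi_{\text{new}} - \pi_{\text{old}}}_{\text{monopolist's gain}}.$$

On this logic the monopolist’s willingness to invest in the innovation is lower because it partly “replaces itself”; the discussion below explains why we regard this stylization as of limited relevance to real-world circumstances.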
The Arrow–Schumpeter juxtaposition is, however, not the major story in dynamic competition.13 As Gilbert (2006: 162) notes, there are many factors at work and:
economic theory does not offer a prediction about the effects of competition on innovation that is robust to all of these markets and technological conditions. Instead, there are many predictions….
In particular, both the Schumpeterian and the Arrow hypotheses are highly stylized and not particularly relevant to real-world circumstances. Take Arrow’s model: if monopolists will not give up their current revenue (i.e., cannibalize their rents), they leave the door wide open for new entrants. To be sure, one should be alert to “dirty tricks” by an incumbent monopolist. But some form of contestability should help save the day. Similarly, the Schumpeterian risk-based requirement for investment in R&D and innovation is overturned once one assumes that investment from multiproduct firms or venture capitalists is available, as it is in most modern economies.
Technological opportunities are another variable of importance that appears in neither Arrow nor Schumpeter. Schumpeter talked about “perennial gales of creative destruction.” Technological opportunity (which can flow from developments in science and in enabling technologies) may shed light on the market structure enigma discussed in the previous section. A leading textbook (Scherer and Ross, 1990: 645) notes that “the structure-to-innovation linkage probably operated over a much shorter time span than the innovation-to-structure linkage.” This second linkage is expected to be stronger in industries with rich technological opportunities. The idea is that market concentration is more conducive to innovation in slow-moving fields, whereas technological opportunity, which can give rise to radical breakthroughs, favors newcomers, not incumbents. Besides, Dosi has shown that concentration might change the trajectory of innovation, because “the possibility of enjoying temporary monopoly (and long-run oligopolistic) positions on new products and processes appears to act as a powerful incentive to innovative activity, and the improvement of existing products” (Dosi, 1982). These refinements seem plausible; however, recent empirical work suggests that the relationship between technological opportunities and market leadership is far from straightforward (Fai, 2007).
Overall, the Schumpeter–Arrow juxtaposition has monopolized the intellectual debate about how competition impacts innovation, and it has yielded little. Carrier, again, writes that “After a half-century of debate and innumerable studies, the consensus is that there is no clear answer to the question.” Given the failure of the Schumpeter–Arrow debate to produce policy-relevant insights, a more fruitful intellectual inquiry is necessary, and in our view it can be found in the technology management literature, discussed below.
2.5 Some other models of dynamic competition
Different models of competition and innovation must be brought to the table. As noted earlier, some of these models can be found in the field of (technology) management. Fortunately, these other models can inform competition policy. They include important work from Klein (1977), Christensen (1997), O’Reilly and Tushman (2008), Abernathy and Utterback (1978), and Jacobides et al. (2006).
Klein (1977) and Abernathy and Utterback (1978) refined Schumpeter’s paradigm of industrial change and postulated an innovation cycle. Considerable evidence now supports this paradigm over a wide range of technologies.14 That evidence implicitly recognizes inflection points in technological and market evolution. The advent of new technological ensembles or paradigms is usually marked by a wave of new competitors entering an industry. To sustain success, incumbents must master discontinuities as well as incremental change and improvement. They often are caught in dilemmas and, absent strong dynamic capabilities, fail to respond or are unable to respond because they lack the relevant skills and assets.
Abernathy and Utterback’s model recognizes that dynamic competition is a process in which entrepreneurs and entrepreneurial managers are important actors. Competition before a dominant design emerges (the preparadigmatic phase) is different from the post-dominant design phase (the postparadigmatic phase). New entrants and management teams in large incumbent companies drive growth and innovation.
O’Reilly and Tushman explain that changes are necessary for an incumbent to survive when it faces competence-destroying innovation. They suggest that change is not easy, because larger firms develop inertia, which is “the organizational equivalent of high cholesterol”. Successful firms that adapt to change perform shifts in strategy, structure, skills, and culture. For example, Apple moved successfully from a single-product strategy (selling the Apple I PC) to selling a “broader range of products,” with a “market wide emphasis”.
Christensen’s model of innovation has caught the attention of many executives and some academics. His “Disruption” model is outlined in the “Innovator’s Dilemma” (1997). He sought to answer two main questions: (i) why is durable competitive advantage so difficult to maintain and (ii) is innovation really as unpredictable as the data suggest? His model was built from close observation of the disk drive, mechanical excavator, and integrated steel industries.
Management plays a key role in Christensen’s model. The dilemma he saw was that “the logical, competent decisions of management that are critical to the success of their companies are also the reasons why they lose their positions of leadership.” As Christensen remarked:
Disruptive technologies bring to a market a very different value proposition… generally disruptive technologies underperform established products in mainstream markets. But they have other features that a few (and generally new) customers value. Products based on disruptive technology are typically cheaper, simpler, smaller, and frequently more convenient to use (p. xviii)
Christensen noted that some companies tend to offer customers more than they wish to pay for. This overkill opens opportunities for new entrants to enter with lower-priced, lower-quality products and then improve their performance in a manner that undermines the incumbent.
There are two important takeaways that can be derived from Christensen’s popular model of disruption. The first we credit to Thiel and Masters (2014: 56–57):
The act of creation is far more important… Indeed, if your company can be summed up by its opposition to already existing firms, it cannot be completely new.
Second, Christensen’s model of innovation is akin to Schumpeter’s, but it provides insights into the mechanisms of Schumpeter’s creative destruction. Christensen showed that incumbent firms often fail to respond to competition from new entrants with lower-priced or lower-quality products because doing so would cannibalize existing revenue and profit streams. And whereas Arrow assumed impenetrable entry barriers, Christensen pointed out the soft “underbelly” of established firms arising from the cognitive blind spots of the top management team. Incumbents are vulnerable because new entrants are not saddled with conventional managerial wisdom, established value networks, or existing technological performance trajectories to follow.15
Interestingly, the above commonplace models in the field of (technology) management appear to turn the standard model of static competition on its head. While established competition policy analysis tends to treat incumbency as a benefit, the (technology) management literature more often considers incumbency as a liability. The data on the fragility of market leadership support the latter view.
Many other “models” of innovation exist. For example, Jacobides et al. (2006) suggest that firms invest in innovation that is aimed at creating an “architectural” advantage. The whole point of the innovation is a form of dynamic competition to set the rules of the game. Firms innovate by shaping the industry architecture (defining complements, selecting participants, and organizing the rules of the game) in areas in which they are not active.16
At their core, most of these models embody a number of assumptions and propositions about dynamic competition. Many are rooted in an evolutionary theory of economic change. And most accept some version of a capability theory of economic change and a behavioral theory of the firm. As Schumpeter said, “in dealing with capitalism, you are dealing with an evolutionary process.”17 These models and others like them can no longer continue to be ignored. A more robust theory of the innovating firm is needed to guide competition economics and competition policy.
3. A theory of the innovating firm
3.1 Firm heterogeneity, dynamic capabilities, and competitive advantage in economic theory
The search for a functional relation between market structure and innovation that has traditionally motivated competition policy research has suffocated a broader enquiry. The result is that a substantial set of issues that matter is missing from competition policy frameworks. What is needed is a theory of the innovating firm that accepts that conduct and performance are impacted by heterogeneity, and in particular by firm-level differences in strategies, business models, organizational processes, ecosystem structures, and, of course, management. Vast libraries of research papers in the field of management show how important these factors are for innovation and competitive performance. They deserve attention in competition analysis.
As noted, the Chicago School adopted the simple Marshallian textbook theory of the firm, in which output expansion, efficiencies, and price reductions are all anchored in the specification of some type of production function or, in more abstract mathematical models of the firm, production sets. This framework assisted in answering many competition policy questions, from the definition of predatory prices and how efficiency tradeoffs might justify certain mergers through to the impact of patent and trade secret royalty payments on output decisions. This theory also informed bargaining models and vertical integration/merger analysis. Unfortunately, post-Chicago work in competition economics has not gone much further with respect to building a theory of the innovating firm.
One would have hoped that more recent developments in the theory of the firm could have provided insight into firms as they exist today. Unfortunately, whether one uses the lens of transaction costs (e.g., Coase, 1937; Williamson, 1985), ownership perspectives (e.g., Hart and Moore, 1990), incentive perspectives (e.g., Holmström and Milgrom, 1994), or other “modern” theories of the firm, nicely summarized and illustrated by Roberts (2004), the model of the firm used today in competition economics remains insensitive to heterogeneity, treating each and every firm essentially as a black box and discounting the role of managers, complex contracting, and organizational arrangements that are so critical for dynamic competition. Accordingly, we must work toward a richer theory of the firm with more empirical content if we aspire to have a more relevant economic theory of the firm informing competition policy. Williamson (1999) himself recognized that skills and foresight were not uniformly distributed. Quoting businessman Rudolf Spreckels’ statement that “Whenever I see something badly done, or not done at all, I see an opportunity to make a fortune”, Williamson commented that “Those instincts, if widely operative, will influence the practice and ought to influence the theory of economic organization” (p. 1089).
Unfortunately, most theories of the firm collapse to not much more than theories of the boundaries of the firm (Gibbons, 2005). They are not theories that accommodate firm heterogeneity. A notable exception is the dynamic capabilities framework of Lazonick, Rosenberg, Teece, Helfat, and others. Although it does not claim to be a theory of the firm, the dynamic capabilities framework is rich in insights about how firm heterogeneity, not market structure, explains differences in performance. The framework (Teece et al., 1997; Teece, 2007, 2009, 2014, 2016) explains how dynamic capabilities enable firms in uncertain environments to secure a competitive advantage. Dynamic capabilities are high-level sensing, seizing, and transforming skills that enable a firm to identify, develop, market, and sell innovative products. Dynamic capabilities are firm level and firm specific (Teece, 2020). They must be built, as they cannot be bought. The firm’s management decides whether or not to develop, maintain, and improve them.18 This distinguishes them from ordinary capabilities, which can be readily bought and taught—most often, they are the skills learned at business and engineering schools—in support of greater efficiency.
Now, just how and why some firms develop dynamic capabilities compared to others remains somewhat enigmatic. The micro-analytics of these decisions are not well explained by economic theory or by any other theory for that matter. Moreover, the dynamic capabilities framework has not yet been extensively employed to account for the success, and failure, of firms in the digital economy. An effort to remedy this situation is commenced in the next section.
3.2 Co-specialization, orchestration, and vertical integration in new markets
In dynamic capabilities theories of the innovating firm, “cospecialization” plays a central role. Assets, resources, and data that are cospecialized need to be employed together to create and capture value (Teece, 1980). In an innovating firm, the distinctive role of the management is to “orchestrate” cospecialized assets, resources, and data. Performed astutely and proactively, such orchestration can create and maintain a firm’s competitive advantage by (i) keeping cospecialized assets in value-creating alignment, (ii) identifying new cospecialized assets to be developed through the investment process, and (iii) divesting or running down cospecialized assets that no longer yield special value.
Often, in highly innovative industries, orchestration cannot be readily achieved through price-based contracting mechanisms because there may not be a competitive supply side that produces and sells the needed capabilities. Hence, when industries are new, it is often necessary for the developer/manufacturer to integrate upstream/downstream not for transaction cost reasons but for entrepreneurial and “capability” reasons.
Another reason that a firm faces hazards when relying on an external supplier for complementary innovation is the difficulty associated with accomplishing coordination of complementary assets and activities. This is related to what Richardson (1960) and Williamson (1975) have called “convergence of expectations.” Investment (in research and development) must be coordinated between upstream and downstream entities, and this is difficult to effectuate using contractual mechanisms.
When there is an asymmetry in capabilities between firms, achieving harmonization is difficult. Boeing discovered this to its cost when it decided to rely on a global array of suppliers to develop parts for its new 787 Dreamliner as a risk- and cost-sharing measure; some suppliers lacked the capabilities to develop parts of the necessary quality, and Boeing had cut back its monitoring capability.
Teece (1996, 2000) and Chesbrough and Teece (1996) have analyzed the difficulties in coordinating the development of complementary technologies when pursued independently and coordinated by contract.19 Delays are frequent and need not result from strategic manipulation; they may simply flow from uncertainty, limited capabilities, and divergent goals among the parties.20
In addition, orchestration might not be achievable by contract if property institutions have not emerged to enable opportunities for trade and economic exchange. In short, in highly innovative industries, capabilities must often be built as they cannot be bought. Integration is the rule, market-based transactions are the exception. These considerations are not featured in the path breaking scholarship of Ronald Coase, Armen Alchian, Harold Demsetz, or Oliver Williamson.
Orchestration and (data) integration are particularly relevant to the digital economy. Today, cospecialized assets, resources, and data are the building blocks of digital firms (Teece, forthcoming). Building and assembling cospecialized assets and data inside the firm (rather than accessing them through a skein of contracts) is not done primarily to guard against opportunism and recontracting hazards. Instead, effective coordination of assets, resources, and data is important but difficult to achieve through the price system. Existing property institutions do not enable efficient opportunities for trade and economic exchange because information and knowledge are a “fugitive resource” (Arrow, 1996) subject to low appropriability and weak intellectual property (“IP”) protection. For example, there may simply be no viable business model for licensing certain types of information and know-how. Taken together, these two factors suggest that special value accrues to achieving good asset alignment inside the firm in the digital economy.21
In some ways, but not in others, the dynamic capabilities interpretation of the innovating firm is consistent with a Coasian perspective. It conceptualizes the firm and markets as alternative modes of governance. However, the selection of what to organize (manage) internally versus via alliances (through ecosystems) or using the market depends on the availability and the nontradability of assets and capabilities; and to some extent on what Langlois (1992) has termed “dynamic transaction costs.”22
3.3 The way forward for a refined theory of firm-level competitive advantage in the digital economy
In order to refine our understanding of the sources of competitive advantage in the digital economy, we propose the hypothesis that strong dynamic capabilities are a leading cause of the successes and failures of Big Tech firms. Stated differently, the strength of a firm’s dynamic capabilities explains the durability and diversification of some Big Tech firms and the stumbles of others.
To understand Big Tech firms’ dynamic capabilities, one needs to look closely inside the black box, so as to pay attention to internal assets, resources, and data. More particularly, the economic analyst should look for distinctive technical and human capital inputs. This requires models that do not treat technology as fungible and in which “management matters”.23
In the digital economy, we believe that dynamic competition requires firms to have superior capabilities to (i) astutely orchestrate big data, (ii) deploy artificial intelligence-driven operating models, (iii) act entrepreneurially, while (iv) eschewing bureaucratic decision making.
3.3.1 Big data and artificial intelligence
A key source of firm-specific competitive advantage for Big Tech firms (especially consumer-facing ones) comes from collecting data about user behavior, developing data structures and infrastructures, and leveraging them to develop new products, services, and applications that deliver increased value.24
These data assets of Big Tech firms are different from 1970s relational databases or classical business intelligence analytics. These data “lakes”, “warehouses”, and “meshes” are very complex, integrating multiple sources and types of structured and semi-structured tabular data, for different use cases and with differing degrees of centralization. The source of these data is increasingly connected devices, such as phones and automobiles, that are either owned by customers or are in some way observing their behaviors. Often, automated software systems relying on machine learning and “artificial intelligence” technologies are used to make sense of the data.
Now, because data come from many different sources and can be used in many different ways, it is often not possible ex-ante to know which sources and which uses will be valuable. The core issue here is some version of the classic joint product problem. When an enterprise produces a product involving fixed, or near-fixed, proportions (or what Leontief called “production processes”), it sometimes produces ancillary products/services that may have positive value, no value, or negative value (as with effluents).
Netflix, for example, saw customer data as an inevitable byproduct of managing its DVD rental business. However, that customer data over time formed the basis of its predictive algorithm that became its core competitive advantage. Likewise, Amazon initially developed, in the 2000 timeframe, cloud computing as an ancillary software support function for itself and third-party merchants (e.g., Target, Marks and Spencer). To more fully utilize the excess capacity it created to meet peak demand, it began providing services to others, and AWS is now a very profitable business.
The same data on customer behavior can in certain circumstances be reused. It is not an instantly depreciating asset.25 While one knows that an asset might be valuable, ascertaining its potential value (and whether it is worthwhile collecting and processing) is difficult because one may not know (ex-ante) how many times a piece of information can be reused. This means that judgment calls are required with respect to how much money to invest in collecting the data.
Google is an excellent example of a firm that was a successful early mover in understanding the value of collecting customer data from search and then linking it to advertising.26 When Google was launched 20 years ago, its key asset was its structured understanding of content on the Web; if one searched for something, there was a good chance Google could find it. At first, the customer data from this search activity were only weakly connected to the advertising industry. Google became a sub-contractor to Yahoo in 2000, the same year it launched its AdWords service (Hu, 2002). In the beginning, Google made modest profits. Now, because of its ability to connect the data27 assembled from customer search with the needs of a vast number of firms in many industries, Google has a very profitable business model. It has a finely detailed picture of online consumer activity, gleaned from all of the tracker cookies it puts on all of its advertisers’ websites, its knowledge of individual search history, and all of the data that Android phones send back. Google knows where device users are located, and what is located geographically proximate to the user. Google knows what websites a user just visited. Google reads Gmail emails. Google knows a lot and leverages these data with advertisers, encouraging advertisers to refine the way they present themselves, clarifying what they want and then making the match. This structure means that when users search, they can get results that are much more likely to be useful. And the ads they give users are much more likely to be relevant and clicked on. As a result, the profit margins of Google have risen, as the company has become a superstar.28
What is also true is that new product (and service) development and delivery involves coordination complexities and risks (negative spillovers) around software development and data mining. Data security is essential for this all to work well. It is also a technical challenge. Capabilities (both ordinary and dynamic) matter too. Few firms can do this well.
Creating and orchestrating digital assets so as to yield a value to ultimate users involve achieving convergence of expectations within the ecosystem, requiring managerial acumen in ecosystem management of a kind the price system cannot achieve by itself.29
3.3.2 Entrepreneurial (not bureaucratic) management
It should be self-evident that managing uncertainty and securing data-related advantages requires managerial skills that are deeply entrepreneurial. This skill constraint helps explain the competitive advantage puzzle confronted by digital firms. One simply cannot sustain the enterprise and grow it without a high quotient of what we call dynamic capabilities…which itself requires entrepreneurial management.
Schumpeter considered that entrepreneurs are the main agents of “creative destruction.” They are, as he powerfully wrote, the “pivot on which everything turns” (McCraw, 2007: 7). Entrepreneurs identify “new things or the doing of things that are already being done in a new way (innovation)” (Schumpeter, 1947: 151).30 They are not just inventors. Entrepreneurial activity involves innovation, organization, and management.31
As enormous volumes of digitized information become available, it is most important to understand that the astute orchestration and management of data is critical to the long-term competitive superiority of digital firms. Because data are modular, weakly appropriable, and often a byproduct of observed economic behavior, the boundaries of digital industries keep changing with new entrants coming in that have found ways to better capture, store, sort, and orchestrate data using new business models or organizational arrangements.
If “data is the new oil,” as the popular aphorism suggests, data are also the new Lego. The problem that data pose for businesses is quite practical: it is about appreciating how to analyze, organize, combine, and utilize them to identify and create new products, business models, and commercial opportunities. Myriad combinations are possible. To carry the metaphor further, in a world of digitalized information, digital firms are asked to build million-piece Lego sets without instructions. Competitive advantage is shaped by the ability to imaginatively combine data science, technology, and business. One cannot conclude that passive control over large datasets allows a firm to live the quiet life, extracting supra-competitive profits akin to monopoly rents.32 Rather, orchestration of the data is critical and requires strong dynamic capabilities.
4. Understanding the policy relevance of Big Tech’s profitability
4.1 Nature of rents: Schumpeterian, Ricardian, and monopoly
Because there are large differences between business organizations, the nature of economic returns, profits, or “rents” can be quite different at the firm level. In assessing firm behavior, this needs to be understood.33 The nature of the rents earned by a firm ought to be relevant with respect to legal and policy analysis. Antitrust law challenges business conduct or transactions that lead to the acquisition, protection, or extension of monopoly power. If this means the ability to extract monopoly rents, then antitrust assessment ought to be different when it relates to the ability to extract other types of rents. Put differently, antitrust law should be able to separate the wheat of legitimate market power rents from the chaff of naked monopoly rents. An operational framework that distinguishes the nature of rents is relevant not just to inform antitrust liability but also in the context of antitrust damages should liability be found.
Economists have long recognized that high profits need not reflect monopoly rents. There are not only serious measurement problems associated with observing profits to infer monopoly power, but, as Demsetz (1973) and Peltzman (1977) pointed out decades ago, superior profitability may reflect superior efficiency, including dynamic “efficiency” or innovation. Determining the sources of rents is thus of some importance (Teece and Coleman, 1998). In so doing, we need to draw the right conceptual distinctions. One of the weaknesses of, and reasons for, the limited influence of dynamic competition in mainstream economics (and for the enduring prevalence of the neoclassical tradition) is perhaps the lack of definitional precision. The economics literature on rents has unfortunately failed to disaggregate the sources of “rents” very well, and it is therefore necessary to go back to basics. We do so by differentiating between Ricardian, Schumpeterian, and monopoly rents.
Ricardian rents reflect returns to assets whose supply is fixed over a finite time horizon; they are a function of permanent differences in the productivity or location of alternative assets or of difficult-to-expand competences. Winter (1995) calls these scarcity rents.
Schumpeterian (entrepreneurial) rents reflect returns arising from the introduction by entrepreneurs (and entrepreneurial businesses) of new combinations, improvements, or methods of production and are a function of the pace at which imitation can occur.
Monopoly rents reflect returns arising from restrictions on output placed on other firms. Technically, a naked monopoly rent is one that is extracted because there are limitations on entry and expansion; in other words, a naked monopoly rent is vulnerable to entry and expansion and would be competed away without such limitations. This is the main difference between monopoly rents, on the one hand, and Ricardian and Schumpeterian rents, on the other: Ricardian and Schumpeterian rents persist even in the face of open entry. This is also the main idea behind the presumption that a firm exercises monopoly power when it raises prices by reducing output. A naked monopoly rent exists if and only if an established firm can restrict not just its own supply but also the supply of entrants.
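The distinction can be illustrated with a stylized price-cost sketch (our own illustrative notation, not taken from the paper). Suppose an incumbent produces output $q$ at unit cost $c_L$ while entrants can produce only at $c_E > c_L$, and let $p_m > c_E$ denote a monopoly price:

$$\underbrace{(c_E - c_L)\,q}_{\text{Ricardian (scarcity) rent}} \qquad \text{versus} \qquad \underbrace{(p_m - c_E)\,q}_{\text{naked monopoly rent}}.$$

With open entry, price is driven toward $c_E$: the first margin, which rests on a scarce superior asset, persists, while the second can be sustained only if the supply of entrants is somehow restricted.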
4.2 Desirable vs undesirable rents
We can go further than the above definitions. Schumpeterian rents are generated because imitation does not occur instantaneously, even though imitators might well “swarm” around the innovators’ key technologies and products. A firm may develop product and process innovations and/or unique business routines (knowledge assets), but these tend eventually to be imitated by competitors. There may, however, be a period of semi-permanent high returns sustained by the continuous development and orchestration of the knowledge assets and data in question. These returns are Schumpeterian rents, even though slow imitation may sometimes allow great financial success for the innovator.
Schumpeterian rents are more of an evolutionary and transient phenomenon than Ricardian ones, because they disappear when improvements from innovation are diffused. That said, Ricardian rents are benign too. Ricardian rents do not necessarily result in lost output to society. As Winter notes, the earner of a Ricardian rent might produce the same level of output as “if the control of the constrained input were divided in numerous atomistic competitors” (Winter, 1995). It is in that sense that Ricardian rents do not cause consumer harm from an antitrust perspective. They do not arise from restrictions of output imposed upon other firms by the beneficiary of the rent or external actors like government. In reality, Ricardian and Schumpeterian rents are beneficial as they incentivize investment in innovation. Monopoly rents, by contrast, are the rents society does not want to see. Monopoly rents might arise because of (unnecessary) exclusionary conduct lacking efficiency or appropriability justifications.
We believe these distinctions are quite fundamental; yet, to our knowledge, there is only limited literature in competition economics that recognizes them. Once again, this is perhaps because of the static nature of mainstream antitrust analysis and its mostly perfunctory treatment of innovation. The distinctions we draw do exist in the economics literature, and they are of quite some importance in the field of strategic management. The welfare implications of each type of rent in the digital economy are quite different.
4.3 Rents and welfare
The question so far unaddressed is whether we can marry our analysis of rents to consumer welfare criteria. We answer in the affirmative. Short-term consumer welfare is reduced when there are monopoly rents because barriers to entry or expansion prevent such rents from being competed away. By contrast, long-term consumer welfare is not reduced when Ricardian and Schumpeterian rents are enjoyed, because these rents would still exist absent anything blocking entry or expansion.
A focus on long-run consumer welfare will protect conduct that earns Schumpeterian rents and Ricardian rents, but it should not exonerate conduct that earns naked monopoly rents. On the other hand, a focus on (short-term) consumer welfare will likely condemn efforts to generate Schumpeterian rents as well as Ricardian rents. Ricardian rents are acceptable because they may in fact stimulate innovation: if lithium-ion batteries are expensive because lithium is not ubiquitous and is controlled by a very small group of producers, high lithium prices can stimulate R&D on alternative battery storage technologies. Table 1 endeavors to summarize our conclusions.
Table 1. Rents and consumer welfare

| Source of rents | Consistent with short-term consumer welfare | Consistent with long-term consumer welfare |
| --- | --- | --- |
| Schumpeterian | X | √ |
| Ricardian | Maybe | √ |
| Naked monopoly | X | X |
Does this imply that firms engaging in conduct that is generating Schumpeterian and/or Ricardian rents get a free pass? Possibly yes with respect to some challenged conduct. However, we need to recognize that large diversified firms may be hybrids and have some combination of two or three of these sources of rents in different businesses at the same time. Of course, dynamic capabilities are associated with generating Schumpeterian rents; ordinary capabilities and “resource-based” approaches may be associated with Ricardian rents.
The strategic management and dynamic capabilities literature is understandably lacking a social welfare criterion. Focused on business policy, not public policy, the strategic management and dynamic capabilities scholarship primarily seeks to equip entrepreneurs with the foundations upon which distinctive and difficult-to-replicate advantages can be built, maintained, and enhanced for long-term enterprise profitability and growth. But the dynamic capabilities literature is not devoid of welfare implications. Firms with strong dynamic capabilities seek to generate and capture Schumpeterian and Ricardian rents and can therefore pay better wages, retain and retrain staff, and build better capabilities in a virtuous cycle (Abowd et al., 2018). They are also more resilient and productive, providing a hotbed for innovative activity (Barth et al., 2016). And they invest more in R&D, leading to a high multiplier effect on social welfare. Surveys show that the social rate of return to private R&D is at least twice the private return (Hall et al., 2010). Without the firm-level capabilities to create, develop, and deploy technological change, capitalist economies cannot attain rapid rates of growth (Baumol, 2006; Metcalfe et al., 2006). One can therefore neither explain the wealth of firms, and in turn of nations, without a theory of capabilities (Sutton, 2012), nor have a competition policy fit for our time.
While we support a long-term consumer welfare perspective as an interim, provisional arrangement, we also recognize merit in developing a broader (long-term) economic welfare approach. Making it operational is difficult. Williamson's total (within-market) welfare or surplus standard—which comprises consumer plus producer surplus—might be a starting point (Williamson, 1968). Naked monopoly rents would be subtracted.
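To fix ideas, the within-market tradeoff at the heart of Williamson (1968) can be written compactly. The formalization below is a minimal, textbook-style sketch in notation of our choosing (Williamson's original exposition is diagrammatic): p0 and q0 denote pre-merger price and quantity, p1 and q1 the post-merger values, and unit cost falls from c0 to c1.

```latex
\Delta W \;=\; \underbrace{(c_0 - c_1)\,q_1}_{\text{cost savings on post-merger output}}
\;-\; \underbrace{\tfrac{1}{2}\,(p_1 - p_0)\,(q_0 - q_1)}_{\text{deadweight-loss triangle}}
```

On this static, within-market test, a transaction or practice is welfare-enhancing when ΔW is positive; naked monopoly rents enter only through the deadweight-loss term, whereas cost-reducing conduct enters through the savings term.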
The Williamson welfare standard is a partial equilibrium model. Impacts on markets other than the primary (or focal) market must also be taken into account; these are ubiquitous, and ambiguous, in the digital economy context. Moreover, beyond cross-market welfare gains and losses, a long-term consumer welfare standard would need to incorporate the time dimension. The creation of entirely new markets and applications also matters greatly. Social returns to innovation in the form of knowledge and educational spillovers, not just business efficiencies, would need to be included. The Mansfield et al. (1977) study of the total social returns from innovation is a good example of how the Williamson partial equilibrium model can be extended to look at upstream, downstream, and lateral impacts.
4.4 Policy insights drawn from the nature of rents
Several lessons can be drawn from the above discussion of profits or "economic rents." First, innovation and dynamic capabilities are not viable in the absence of the financial returns necessary to draw forth continued investment in the ecosystem. Second, big is not bad, and dominance-based intervention thresholds are inappropriate. Competition between platforms tends to reduce winner-take-all outcomes. Moreover, industry structure may be too fragmented, and most small firms cannot afford the R&D and professional management needed to develop commanding dynamic capabilities. Third, criteria are needed to distinguish (from a social welfare perspective) between "undesirable" profits (monopoly) and "desirable" (Schumpeterian and Ricardian) rents. For the reasons just explained, such performance indicators must complement market structure analysis. The underlying idea is that high-performing firms that invest despite deep uncertainty are unlikely to be the beneficiaries of naked monopoly rents. The best of all monopoly profits is indeed a "quiet life," as Sir John Hicks said (Hicks, 1935). Firms with Keynesian "animal spirits" are not leading quiet lives. Fourth, antitrust law should not prohibit conduct merely by virtue of its form. Platform-to-platform competition leads to openness, which leaves less opportunity to capture value directly. The law should thus allow firms increased behavioral flexibility for monetization. Fifth, long-term ecosystem prosperity requires that platform leaders be adequately compensated.
In digital markets, firms often leave the question of how and when to appropriate rewards quite open, at least in the short term, as Facebook did when it acquired WhatsApp or Google when it bought Waze. They are not wedded to a particular appropriability strategy and are ready to defer the issue. The priority is growth, expansion, and scale, not profits, consistent with the Schumpeterian theory that goal-driven economic agents may try out new ways of doing things that they deem promising (Nelson, 2012). This does not mean that digital firms disregard rewards, appropriability, or intellectual property. With the above in mind, the incentive design challenge is greater in digital markets, and there is a risk of an increased "ratchet effect" if regulators change the rules of the game in the interim (Freixas et al., 1985).
On other occasions, innovators may adopt a pricing rule which becomes a focal point for other pricing decisions. For example, since 2007, Apple has charged a standard 30% fee to developers and content producers selling through the App Store. Epic and other companies like Spotify consider the 30% fee too high, and antitrust proceedings against Apple are well advanced. Epic is seeking an order that would force Apple to abandon App Store rules restricting out-of-app payments. Ultimately, Epic wants to reduce the fees paid to Apple for App Store distribution.
The central question in the antitrust case, as we see it, ought to be whether Apple is collecting a naked monopoly rent or whether it is capturing Ricardian or Schumpeterian rents. We believe there are two ways to determine this. First, an antitrust examiner must ask whether the entry of another competitor or supplier of app stores could be expected to put downward pressure on distribution fees. At first sight, this might appear to be the case, because Apple contractually and technically prevents other app stores from being installed on its operating system. At the same time, industry history provides different cues. When Google Android entered the industry with the Play Store, Apple did not need to reduce its fees. In this case, the persistence of rents could indicate Ricardian or Schumpeterian rents, not naked monopoly ones. In addition, the number of apps distributed has kept growing, so that usage of the constrained input (i.e., the App Store) looks more like the value capture that would arise under conditions of atomistic competition. A better empirical test, allowing a finer evaluation of the nature of the App Store rent, would consist in asking what would happen to the 30% fee if Apple allowed the installation of a competing app store on iOS.
Second, an antitrust examiner can focus attention on the existence of short-term obstacles to entry. To extract a naked monopoly rent in the long term, a firm must continually suppress competition for entry into the market by investing in isolating barriers. By contrast, firms that want to maintain Ricardian or Schumpeterian rents in the long term must invest in R&D, advertising, and entrepreneurial activities. Disaggregated historical and current data on Apple's investments in the App Store might provide information on the nature and magnitude of those investments and, in turn, tell us whether Apple is a "snatcher" or a "sticker", a distinction Sir John Hicks drew over half a century ago (Hicks, 1954).34 Snatchers behave opportunistically, taking short-term gains.
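To make the snatcher/sticker screen concrete, the following is a minimal sketch in Python of how disaggregated investment data could be read. The input series, the intensity threshold, and the classification rule are illustrative assumptions rather than an established test.

```python
from statistics import mean

def classify_rent_profile(revenue, reinvestment, min_intensity=0.10):
    """Heuristic screen: a 'sticker' keeps reinvesting (R&D, tooling, developer
    outreach) as revenue grows; a 'snatcher' lets reinvestment intensity decay
    while harvesting short-term gains.

    revenue, reinvestment: yearly series of equal length for one business line.
    min_intensity: illustrative threshold for reinvestment / revenue.
    """
    intensity = [inv / rev for inv, rev in zip(reinvestment, revenue)]
    early = mean(intensity[: len(intensity) // 2])
    late = mean(intensity[len(intensity) // 2:])
    if late >= min_intensity and late >= early:
        return "sticker-like (consistent with Schumpeterian/Ricardian rents)"
    return "snatcher-like (warrants closer scrutiny for naked monopoly rents)"

# Illustrative numbers only (not Apple data): revenue grows while reinvestment
# intensity falls, so the screen flags a snatcher-like profile.
print(classify_rent_profile(revenue=[10, 14, 20, 28], reinvestment=[1.5, 1.8, 1.9, 2.0]))
```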
5. Rules and standards for a dynamic competition policy
5.1 General
The dynamic competition paradigm we have advanced in this paper allows one to recognize certain principles. First, it should be self-evident that it will be harder for a firm stuck with just ordinary capabilities to catch up with rivals with strong dynamic capabilities. The upshot is that suspected conduct or merger and acquisition transactions from firms with weak dynamic and ordinary capabilities should be treated with greater skepticism. In these cases, anticompetitive purpose or effects will be easier to infer, for firms without dynamic capabilities should know that they have little chance to catch up, and so might be more tempted to resort to anticompetitive conduct.35
Second, competition policy should be circumspect toward market transactions like M&A or technology transfer agreements promoted as a way to reduce a dynamic capability gap. By contrast, competition policy should be more tolerant of non-market investments in dynamic capabilities. The rationale for this rule stems from the scholarly presumption that most dynamic capabilities cannot be readily assembled through markets (Teece, 1982, 1986, 2007) and must be built organically, natively.36 Firms are highly individualized repositories of productive knowledge, much of which is tacit and therefore difficult to describe, trade, and absorb by M&A. In a merger transaction, competition courts and agencies should be skeptical about accepting an efficiency defense based on the absorption of one of the merging parties' dynamic capabilities.
Third, and for a similar reason, diversification that builds upon or extends existing capabilities is about the only form of diversification that a capabilities-based competition policy should view as meritorious (Teece, 1980, 1982; Teece et al., 1997). By contrast, competition policy should adopt less permissive standards towards diversification in areas in which a firm has a low capabilities position.
Fourth, R&D and skilled labor are firm-specific inputs that can underpin dynamic capabilities. Business conduct or transactions that are instrumental to securing such inputs should be dealt with under defendant-friendly liability standards in competition inquiries.
Fifth, the time dimension is key to distinguishing anticompetitive steps belonging to the gestalt of issues involved in assessing the likelihood of monopoly conduct—like alleged predatory pricing or anticompetitive tying—from procompetitive moves that are part of an iterative evolutionary path—like product repositioning or coalition-building strategies. Antitrust fact finders should attempt to determine whether a firm's impugned conduct or transaction is designed to create, maintain, or increase short-term profits, or whether it pertains to a longer-term set of growth-minded sensing, seizing, or transforming activities. Put differently, the analysis should seek to separate short-term conduct or transactions that make sense from an income statement perspective from long-term behavior that makes sense from an innovation development perspective. To be sure, the distinction between the long and short term is a vexing one. Mainstream economics abandoned the search for a practical screening tool decades ago. But we can nevertheless draw one concrete insight from the dynamic capabilities literature: facial examination of suspected restraints is injudicious in markets with deep uncertainty. Per se rules are error prone. A better mode of inquiry consists in evaluating business conduct and transactions under the rule of reason, or at least in subjecting them to rebuttable presumptions of legality and illegality. We do so below with respect to two examples: acquisitions and self-preferencing.
5.1.1 “Killer” acquisitions
One narrative that has gone viral in policy circles is that Big Tech's acquisition decisions are driven by efforts to suppress nascent competition. In a best-selling book on information technologies, law scholar Tim Wu claims that history supports the existence of an industry-specific "Kronos effect", whereby dominant companies consume their potential successors in their infancy (Wu, 2010).
The evidence on "killer" acquisitions suggests these transactions are low-frequency events (Gautier and Lamesch, 2020). Admittedly, Big Tech firms record a high nominal number of M&A transactions. But the "killer" acquisitions narrative is essentially based on an analogy with patterns observed in the pharmaceutical industry. An empirical study found that 5.3%–7.4% of the acquisitions of pharmaceutical companies were killer acquisitions (Cunningham et al., 2020).
The reasoning about the social harms of killer acquisitions is also diverse. Some consider that killer acquisitions promote inefficient, duplicative innovation: because the main targets of killer acquisitions are startups developing efforts within the same market as the buyer, the prospect of such acquisitions incentivizes the development of products that represent strong competitive threats but duplicate the buyer's efforts, which is a social waste (Cunningham et al., 2020). Others advance a symmetrically opposite argument: the prospect of being acquired and discontinued by a dominant firm scares investors in startups and venture capitalists, who no longer fund innovation in the "kill zone" of Big Tech companies (Kamepalli et al., 2020). Some have also pointed out that Big Tech acquisitions might prevent alternative, more efficient mergers from taking place (Petit, 2020; Parker et al., this issue).
Last, one intuitively strong idea against Big Tech M&A is that startups give in too fast. Founders' preference functions favor short-term exits by sale to a Big Tech firm over long-term, stand-alone growth (Lemley and McCreary, 2020). Founders discount the future heavily, for a variety of reasons. Systemic underpricing of IPOs is one of them. Taxation also plays a role. Big Tech incumbents' perceived market power might be yet another factor.
At any rate, the concern that M&A events in digital industries are low-frequency but carry high anticompetitive costs (Furman et al., 2019) nurtures widespread support for a more precautionary merger policy (Motta and Peitz, 2020). Today, many seem convinced that it was wrong to let Facebook acquire Instagram in 2012. But what is obvious in the present was far from clear in the past, under the guidance of established competition wisdom. We do, however, believe that a capabilities perspective would have triggered deeper analysis of the relevant potential competition issues.
A hodgepodge of reform proposals is now on the table. Some recommend wider ex-ante reporting requirements for M&A transactions in digital industries. Others support the adoption of a presumptive rule against Big Tech M&A and a reversal of the burden of proof, requiring large companies to show efficiencies (Caffarra et al., 2020). A stricter remedy consists in administering ex post breakups of consummated mergers and dominant firms (Kwoka and Valletti, this issue). Last, some have argued that an ex-ante obligation on gatekeeping platforms to allow third-party access to a user's raw data upon that user's request would curb the profits from anticompetitive M&A (Parker et al., this issue).
All proposed reforms share a belief that the social costs of lost competition due to Big Tech M&A are higher than the costs of the reduced incentives to innovate that would result from a merger rule limiting exit opportunities for entrepreneurs. We believe that what is needed is a framework that assesses potential competition through a capabilities lens.
Competition fact finders should, perhaps, approach claims of merger efficiencies in digital industries with some skepticism. This should not be taken to mean that competition policy must subject all mergers in digital industries to a negative presumption. Indeed, even if one adheres to the view that organic growth dominates external growth by acquisition from an efficiency standpoint, this is not a realistic counterfactual for merger review. Survey evidence shows that three-fourths of successful VC-backed startup exits occur by acquisition rather than by IPO (Gompers et al., 2016).
The dynamic capabilities framework suggests that some classes of acquisitions are less problematic than others. Three rules of thumb can be drawn from the literature. One, and quite counter-intuitively, the higher the degree of alignment between the merging firms, the greater the scope for efficiencies to redeem an otherwise anticompetitive merger. This is because successful absorption of dynamic capabilities is more likely for firms that have already developed a "path of learning" than for firms that have closed it. Two, absorption of dynamic capabilities is easier when the acquired firm is young. By contrast, older firms possess deeply ingrained routines that are hardwired into the organization and difficult to transfer by acquisition or agreement. Three, the risk of reduced competition by acquisition is lower when the acquired firm is a nascent startup, because this is when its chances of survival are lowest. By contrast, the acquisition of a more mature firm, for example one that has already exited by IPO, represents a greater competitive risk because its chances of survival but for the acquisition are higher.
To see this concretely, take Google's acquisition of Waze. Google had already sensed and seized the potential of turn-by-turn navigation apps, as shown by its commitment to the development of Google Maps. The post-merger outcome was a predictable deepening of Google's capabilities through divisional competition and cooperation between Waze and Maps. Now contrast this with Facebook's USD 19 billion acquisition of WhatsApp. Facebook already had a messaging service, meaning a high potential for capabilities absorption. At the same time, WhatsApp already had a large installed base by the time of Facebook's acquisition, suggesting a high survival potential. Moreover, WhatsApp enjoyed strong VC backing, whereas Facebook had completed a disappointing IPO.
In retrospect, the Facebook/WhatsApp merger looks like a success. WhatsApp grew again under Facebook. At the same time, given Facebook's observably low capabilities, a reasonable theory should have been that WhatsApp would keep developing as a strong Facebook competitor (or in combination with a firm holding stronger dynamic capabilities). The result of a merger prohibition might have been a (now missed) cycle of Schumpeterian competition in personal social networks. Today, WhatsApp is yet to deliver revenues to Facebook. Years after its acquisition, no clear monetization model has emerged for WhatsApp. Together, these points hint that the acquisition might have been driven by an intent to remove competitive capabilities from the market.
The strategy of a business is central to its performance and, therefore, to the competition it delivers to rivals. However, this is not yet in focus in mainstream economics. In merger review, agency analysts might want to know the rationale for the deal. Once provided, however, that information is usually set aside. There are of course good reasons to understand the motivation for a deal. But this information is practically ignored on the ground that economic studies have shown that the stated motivations of a deal often fail to materialize. This is a questionable policy. It does not follow from the fact that many M&A transactions fail to create value that mergers are not motivated by efficiency, or that projections of post-merger efficiency gains are irrelevant to managerial merger strategy. Besides, "efficiencies", when occasionally considered, are construed too narrowly. They have little to do with organizational capabilities or innovation (Teece, 2020). To date, we still lack what Henry Manne called for when he regretted that the field had not developed "statistical methods for distinguishing mergers motivated by a quest for monopoly profit from those merely trying to establish more efficient management in poorly run companies" (Manne, 1965). Similarly, in his elegant study of the "welfare tradeoffs" from mergers, Williamson acknowledged the constraints of operationalization. Williamson's model assumes that the demand and supply curves do not shift, leaving out of the picture the dynamic effects of "technological progress" and innovation on marginal costs and benefits (Williamson, 1968). Williamson did not ask what minimal upward shift in the demand curve would more than offset any deadweight loss created by post-merger monopoly power.
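To illustrate how that unasked question could be operationalized, the sketch below (in Python) assumes linear demand and the standard Williamson setup, and solves numerically for the smallest parallel upward demand shift that restores pre-merger total surplus. The functional form and all parameter values are illustrative assumptions, not estimates drawn from any case.

```python
def total_surplus(a, b, price, cost):
    """Total surplus under linear demand q = a - b*p (consumer + producer surplus)."""
    q = max(a - b * price, 0.0)
    consumer = q * q / (2.0 * b)   # triangle under the demand curve above the price
    producer = (price - cost) * q  # unit margin times quantity sold
    return consumer + producer

def breakeven_demand_shift(a, b, c0, p1, c1, tol=1e-9):
    """Smallest parallel upward shift d in the demand intercept such that the
    post-merger world (price p1, unit cost c1, demand a + d) yields at least
    the pre-merger total surplus (competitive price c0, unit cost c0, demand a)."""
    baseline = total_surplus(a, b, c0, c0)
    lo, hi = 0.0, a
    while total_surplus(a + hi, b, p1, c1) < baseline:
        hi *= 2.0  # expand the bracket until the shifted world beats the baseline
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_surplus(a + mid, b, p1, c1) >= baseline:
            hi = mid
        else:
            lo = mid
    return hi

# Illustrative parameters only: demand q = 100 - p, pre-merger cost 40,
# post-merger cost 39 but price raised to 55. The break-even shift is about 1.1.
print(round(breakeven_demand_shift(a=100, b=1, c0=40, p1=55, c1=39), 2))
```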
5.1.2 Self-preferencing
Self-preferencing is a form of discrimination. In the standard case, a prominent platform that administers interactions between two or more user groups gives preferential treatment to its own applications, products or services in related areas. Prominent examples of alleged self-preferencing in the digital economy include Apple consistently favoring its apps by displaying them more prominently than similar apps in App Store search results and on the App Store home page (Kotapati et al., 2020); Microsoft using its dominance over Windows to give Internet Explorer a distribution advantage that other web browsers are unable to match (Buhr et al., 2010); or Netflix tweaking its algorithm and user interface to display favorably its own shows and reduce licensing costs to third-party suppliers of video content.
In the policy conversation, self-preferencing has been attacked on the ground that it places upstream (or downstream) rivals at the mercy of anticompetitive exclusion by vertically integrated platforms. Some antitrust scholars have also raised distributional and ethical concerns. Self-preferencing should be strictly prohibited because it denies living profits to suppliers, producers, and developers (Khan, 2017) and it undermines equality of opportunity.
To date, competition fact finders evaluate self-preferencing allegations through a leveraging framework (Crémer et al., 2019). The competition analysis seeks to determine whether a dominant firm has the ability and incentives to use its power in one market to exclude rivals in another market (Caffarra et al., 2020). In the 2017 Google Shopping case, the European Commission ("EC") undertook a painstaking effects-based analysis to show that Google's prominent display of its own comparison-shopping service ("CSS") on general search pages, combined with the demotion of rival CSS websites, had produced exclusionary effects. Given the resource-intensive nature of this inquiry and the pervasive practice of self-preferencing by digital platforms, several expert reports (often commissioned by competition agencies) have proposed to adopt a quasi per se prohibition rule against self-preferencing (Cappai and Colangelo, 2020). Remedies have also been discussed. The 2017 Google Shopping decision has not yet produced tangible restorative effects on market competition. Some competition experts now consider that a proper antitrust treatment of self-preferencing disputes requires stricter remedies like non-discrimination requirements (Khan, 2017), line of business restrictions, or divestitures (Khan, 2019).
What new light can a dynamic capabilities framework shed on the antitrust and regulatory evaluation of self-preferencing? At first glance, dominant firms that favor their own products grab low-hanging profit opportunities instead of investing in R&D to develop new products. While self-preferencing might thus belong to the set of acceptable strategies deployed by firms competing in predictable environments (as confirmed by competition policy-makers' acceptance of self-preferencing by supermarkets, banks, and insurance companies), it does not look like the kind of behavior that generates great social benefits. To that extent, competition policy skepticism toward self-preferencing in digital markets may not be misplaced in all circumstances.
At the same time, self-preferencing has a strong developmental dimension. A dynamic capabilities-trained eye will instantly notice how self-preferencing fits within the panoply of sensing, seizing, and/or transforming activities. Letting suppliers and producers compete allows a platform to sense which product, service, or application features best meet customers' needs. During this phase of alpha or beta competition, the requirement of fostering entry and innovation implies that it makes a lot of sense for a platform to commit, by word or by deed, not to provoke certain complementors, and thus to refrain from self-preferencing. And a possible case might be made against a platform that opportunistically re-contracts during this phase. Put differently, platforms that make guarantees of access to developers should honor these commitments. That said, once the alpha or beta competition phase is over and has produced enough data points, the platform can seize a business opportunity and transform it into a new, better value proposition by recourse to vertical integration. Self-preferencing is simply the epilogue of this process. The digital firm resorts to self-preferencing to weed out the bad designs and settle on a dominant one. This comes close to the point made by Jenny (this issue), whereby "some practices which may be regarded as unfair by complementors competing with the core platform (such as self-preferencing or the provision of services previously offered by complementors) may nevertheless be economically justified if they globally increase the value of the transactions or communications services offered by the ecosystem". In this respect, it is also critical to note that self-preferencing might be less abrupt for suppliers, producers, and developers than alternatives like product deprecation, obsolescence, and retirement policies.37 In addition, self-preferencing often appears justifiable in industries with little ex-ante cooperation over product development, where standards emerge by trial-and-error imitation.38
Past antitrust cases provide ingredients to assess the policy relevance of a dynamic capabilities approach to self-preferencing. In Google Shopping, attention to dynamic capabilities would have led the fact finder to ask whether Google’s prominent positioning of its shopping service, displayed in rich format, marked the last step of a long-term process of evolutionary improvement of consumer welfare compared to blue links to CSS. As tech analyst Ben Thompson (2017) wrote about the EC’s decision: “if I search for a specific product, why would I not want to be shown that specific product? It frankly seems bizarre to argue that I would prefer to see links to shopping comparison sites.”
More important perhaps, shifting the focus of analysis towards dynamic capabilities might have elevated the antitrust fact finders’ confidence levels toward future supply trends. With the benefit of hindsight, we now know that Google has been championing a coalition-building effort to compete against Amazon’s entire integration of the online commerce stack.39 The latest piece of evidence about this is a 2020 announcement of a Google partnership with PayPal and Shopify to make it free for merchants to sell on Google and obtain free listings in Google Shopping search results.40 Again, from a long-term dynamic capabilities standpoint, self-preferencing might have looked like the prologue to rising levels of inter platform competition.41
To these remarks, we add one more. The dynamic capabilities literature suggests that self-preferencing will work better, and thus make more business sense, in adjacent products. This should guide the development of a competition rule that is more forgiving toward self-preferencing involving close substitutes and complements, and less hospitable toward self-preferencing involving non-functionally related products or services. To put the point graphically, competition policy should condone a general search engine that favors its own specialist search services (e.g., maps, jobs, flights, or real estate search). By contrast, competition policy should raise more objections towards a general search engine that favors unrelated services like data centers, cloud computing, or social networking.
5.2 Analytical tools, tests, and predictors for a dynamic competition paradigm
In this section, we sketch out tools relevant for analysis in digital industries. The likelihood of error by courts and agencies under the current orthodoxy is indeed high given their limited toolkit for understanding complex innovative environments. But it will also be high if they attempt to apply the dynamic competition framework as such. The dynamic competition and dynamic capabilities literatures are highly conceptual, and their implementation is difficult. Capabilities are organizationally embedded and hard to measure. The concrete application of dynamic capabilities principles to competition policy development and execution thus requires additional effort to convert abstract scholarly ideas into operational tests. As part of this task, we are also reminded that it is good policy practice to strive for "information light" policies, that is, policies that do not require information that cannot be made available to regulators (Tirole, 2014: 509). That said, some easy first steps are available.
5.2.1 Market definition
As implemented in recent decades with the SSNIP test, market definition has tended to become price-centric. Encouraged by economists, courts have favored this price-centric approach. Both of us have tried to modify the price-centric approach to account for innovation, through the development of revised tests in which product performance attributes (reflecting innovation) or competitive pressure are also taken into account.42
While attempts to rescue the SSNIP test with the SSNIPP test (Pleatsikas and Teece, 2001a) were ignored by mainstream economics, we believe that this alternative approach still provides a useful way to define markets. It will generally lead to markets being defined more broadly, and it therefore has a good chance of recognizing the broad-spectrum competition that is evident in the tech sectors of the US, European, and Chinese economies. Of course, one might counterargue that such approaches are too forgiving of market power. Yet they bring a useful correction to a widespread concern that market definition in the tech sector under the SSNIP test tends to be biased towards findings of monopoly power.
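For readers unfamiliar with the mechanics, the sketch below shows the critical-loss arithmetic that typically accompanies a SSNIP exercise, together with a naive performance adjustment in the spirit of the SSNIPP proposal. The adjustment, the function names, and the numbers are our illustrative assumptions, not the test as published by Pleatsikas and Teece.

```python
def critical_loss(price_increase, margin):
    """Share of unit sales a hypothetical monopolist can afford to lose before a
    small price increase becomes unprofitable: X / (X + M), with X the price
    increase and M the contribution margin, both expressed as fractions."""
    return price_increase / (price_increase + margin)

def quality_adjusted_increase(price_increase, performance_decline):
    """Illustrative SSNIPP-style adjustment (our assumption): treat a small,
    non-transitory decline in product performance as an addition to the
    effective, quality-adjusted price increase."""
    return price_increase + performance_decline

# Standard SSNIP arithmetic: a 5% increase with a 60% margin implies the
# candidate market is too narrow if the actual sales loss would exceed ~7.7%.
print(round(critical_loss(0.05, 0.60), 3))

# Quality-adjusted variant: a 5% price rise combined with a 5% performance decline.
print(round(critical_loss(quality_adjusted_increase(0.05, 0.05), 0.60), 3))
```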
Besides, it is also the case that the salience of ecosystems and complements likely changes market definition as well. This point is now recognized by many and is developed further by Jacobides and Lianos (this issue). Current approaches to market definition are simply not suited to the digital world where some ecosystems endeavor to attract and support complements, and their pricing reflects complex interdependencies not factored into the design of the SSNIP test. In ecosystem environments, competitiveness increases through the quality and quantity of complementors provided. The key exercise in market definition is to recognize where competition comes from and where it has the potential to come from in the future. The broad-spectrum competition mentioned earlier indicates that market boundaries should be drawn holistically to encompass multi product competition.
5.2.2 Potential (dynamic) competition
Structural analysis still matters in the digital economy. There is no suggestion here that competition policy-makers should completely abandon it. But no structural analysis of digital markets can ever be complete without an analysis of the particular structures that really matter (e.g., ecosystems, markets, institutions) and a proper account of the deep uncertainty that arises from potential (dynamic) competition. This is important because policy analysts maintain a proclivity for assessing competition in digital markets by reliance on (market) share-based metrics, concentration ratios, and Herfindahls.
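As a reminder of how little such metrics reveal, the Herfindahl–Hirschman index is a one-line computation on current shares. The sketch below uses made-up shares; the number says nothing about capabilities, entry paths, or potential competition.

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman index: sum of squared market shares expressed in
    percentage points, ranging from near 0 (atomistic) to 10,000 (monopoly)."""
    return sum(share * share for share in market_shares_percent)

# Hypothetical shares only: prints 3000, "highly concentrated" on the usual
# agency thresholds, yet silent on whether the market is one product cycle
# away from upheaval.
print(hhi([40, 30, 20, 10]))
```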
In digital industries, products that are imperfect substitutes or complements compete against each other dynamically for user demand (Adner and Lieberman, 2021). Much anecdotal and empirical evidence shows that competitive pressure arises from non-substitute products, services, and business models that modify the relative preferences of users, raise the opportunity cost of present product consumption, and shift the demand curve for existing products inward. For example, users experienced lower relative utility from consumption of (i) desktop computers with the introduction of mobile phones; (ii) web browsers with the development of search engines; and (iii) comparison shopping websites with the growth of merchant platforms. Unfortunately, conventional market definition methods that focus on actual (static) patterns of user substitution between rival products tend to discount that potential (dynamic) constraint.
A misplaced focus on static patterns of substitution has been clearly on display in the EC's Google Android decision. Here, the EC held that Google did not compete with Apple in smartphone operating systems ("OS") on the ground—amongst other things—that Apple's iOS was not licensed to third-party OEMs. The EC's market definition is inconsistent with historical evidence showing that Android's entry stole smartphone users from Apple despite their distinct business models, and with contemporary evidence suggesting that both ecosystems compete for users by product differentiation on choice variables like privacy (Petit, 2020). The EC's market definition in Google Android also leads to curious implications, such as the idea that a merger between Apple and Google in smartphone OS would be prima facie unproblematic, absent actual horizontal overlaps.
The problems of static market definition might be mitigated by a revamped doctrine of potential competition. We write "revamped" because the conventional assessment of potential competition asks whether firms located in other markets or industries have incentives to repurpose assets so as to compete against established firms with close-to-perfect substitute products. In digital industries, firms compete by indirect entry (Bresnahan and Greenstein, 1999; Petit, 2020). The dominant mode of competitive attack consists in supplying differentiated products (Pleatsikas and Teece, 2001a), complements, or "new combinations" (Schumpeter, 1934) (see Figure 1). In particular, competitive pressure might be exercised by products relying on different technological infrastructures, supported by distinct business models, or supplied through specialized vendors. Head-to-head entry with very similar products is often difficult, or even completely unwise. Non-rival competition is the rule, not the exception. As Bill Hewlett (cofounder of Hewlett-Packard) told his employees: "attack the undefended hill, not the defended one."

The reason for the greater ease of leveraging complements to produce competition than substitutes is easy enough to see. There are limited switching costs to complements on the user side. Users benefit from adding additional functionality to an existing product. By contrast, there are often switching costs to substitution on the user side due to the loss of sunk experience, learning, convenience, etc. (all the more when multi homing is not possible). A rational supplier thus quickly understands that there may be more short-term user surplus to extract from complements than substitutes.
Moreover, in the mid to long term, value can shift from the core product to the complement as incremental improvements are introduced. A complement supplier can thus adopt a two-stage strategy: first break the entry barrier of an ecosystem with a complement, then attack the insulating barrier that protects the core product. The end game may be one in which all the value is siphoned away from the core product. Accordingly, one should view ecosystem competition from a 360° perspective. There is a given pool of rents at stake. Competition is vertical, lateral, and horizontal. Competition is for rents, not users per se. Through this lens, complementors compete just as much as direct competitors, but along a different horizon.
With this in mind, the correct approach to entry analysis consists in putting more weight on Schumpeterian factors that keep nominal "monopolies" under competitive pressure. This has two consequences: one for market definition and another for potential competition predictors. To start, because technological competition requires a longer time period to unfold than price competition, the boundaries of the relevant market must comprise all entrants with a potential entry path over a four-year period (compared with the existing 5%–1 year threshold used to assess substitution in supply and demand). Market definition is no more than a tool, a method. As the court put it in Transamerica Computer Company Inc. v IBM, "A market definition should 'recognize competition where, in fact, competition exists', and should include all significant competition even though that competition differs in form or nature".43 No reification of short-term market definition, as is done today, is thus warranted.
Second, potential competition analysis should not focus on supply-side substitution possibilities, but on technology "peers". The inquiry should in particular focus on the magnitude of the technological capabilities of competitive peers, the disciplinary effects of the R&D programs of competitive peers even if new products are not yet in the market, and the magnitude of other competitive peers' patient capital.
5.3 The law is further along than economic theory: doctrinal vs. operational issues
Interestingly, our approach does not require foundational changes in antitrust/competition law. (We note the relevance of Judge Hand's "thrust upon" monopoly concept and his recognition of the importance of superior skill, foresight, and industry.) The problem is more with the economics, less so the law. As Thomas Kuhn explained long ago, in science there is an inherent conservatism around paradigm shifts. We believe that a paradigm shift from static to a more wholesome, dynamic conception of competition is now required. The static model, representing what Kuhn (1962) calls "normal science", has outlived its usefulness and is now standing in the way of a deeper understanding of competition in digital economies.
The dynamic competition framework we propose is not a critique of the law but a guide to assessing the facts.44 In many ways, antitrust law, at least in the USA and increasingly in Europe, allows for the introduction of considerations that relate to the capabilities of the enterprise in evaluating Sections 1 and 2 cases as well as in merger review. We advance the proposition that, on a "go forward" basis, the focus should not be on market power but on anticompetitive conduct, with particular attention to naked exclusionary practices.
Statutory antitrust law says nothing about the goal of antitrust being to lower prices. Rather, the focus is on whether business behavior is anticompetitive. Merger law, for instance, does not focus on the impact on price. The existing law requires one to look at competitive effects. Hence, there is plenty of room in existing legal structures to bring in innovation, but too few choose to do so, often because economists and lawyers are rather awkward when analyzing it, despite its centrality to competition.
In Appalachian Coals, the US Supreme Court adopted a dynamic capabilities view of the competitive process when it absolved a combination of 137 coal producers from Section 1 liability on the ground that "The intelligent conduct of commerce through the acquisition of full information of all relevant facts may properly be sought by the cooperation of those engaged in trade, although stabilization of trade and more reasonable prices may be the result".45 In United States v Grinnell Corp, the US Supreme Court held that the basic standard for monopolization under Section 2 requires proof of "the willful acquisition or maintenance of that power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident".46 And in US v General Dynamics, the US Supreme Court recognized that static analysis by itself would not suffice in analyzing merger cases that involved markets that were not static.47 The Supreme Court indicated the necessity of looking at the trends in market concentration and associated new entry and exit. In Heinz/Beechnut, the US Court of Appeals for the District of Columbia followed this prescription when it held that product innovation claims were implausible in the baby food industry "given the old-economy nature of the industry", inviting a different antitrust treatment for dynamic markets.48
The European case law has moved more slowly towards a recognition of the importance of dynamic issues. Yet the trend towards a more innovation-minded doctrine is unmistakable. In Microsoft/Skype, the EU General Court relativized the relevance of market shares in the assessment of mergers in markets "characterized by short innovation cycles" like the fast-growing "consumer communications sector."49 In CK Telecoms v Commission, the same Court held that a standard efficiency of mergers consists in the ability of the merged entity to "redeploy" staff.50 And in Magill and IMS Health, the EU Courts embraced an innovation-friendly ecology of competition by maintaining strong antitrust protection for owners of intangible assets protected by IP, while tolerating a narrow opening of refusal-to-deal liability to protect dynamic competition with non-substitute products. In the Court's view, a qualified duty to deal can be justified upon a showing that competition in the particular case does not arise by me-too imitation (Geradin, 2004).
Now, the case that most visibly broke with the structuralist static competition model is the US Supreme Court opinion in Verizon v. Trinko.51 Grengs (2006) notes that the case is mainly cited for its impact on the telecom sector but from our perspective, it opened the door for considering firm-level capabilities and dynamic competition.
Trinko’s positive view of heterogeneous resources is directly contrary to the Alcoa decision and similar cases. Nor does it sit at all well with structure-conduct-performance or S-C-P (and Porter’s (1980) 5 forces framework) which sees the intensity of competition being related to market structure, not the particular strategies and associated capabilities of particular firms. Grengs concludes that with Trinko, “the court firmly broke with the S-C-P, Chicago, and post-Chicago schools of microeconomic theory.” With Trinko the door has been wide open for the capabilities approach for some time. Grengs (2006: 106) is quite specific, noting:
Trinko also represents a profound change in the Supreme Court jurisprudence on microeconomic competition. Specifically, Trinko represents the first Supreme Court case to break directly with both the well-defended “structure-conduct-performance” and Chicago schools of economic analysis, as well as the vaguely defined “post-Chicago” school of micro economic analysis… the Supreme Court articulated a classical rivalrous process view of competition, as refined through the core insights of the “Resource-Advantage” theory of competition, consistent with the 1890 enactment of the Sherman Act. In doing so, the court rejected the neoclassical construct of “perfect competition” as a welfare ideal.
The US Supreme Court’s embracing of the resource-advantage theory is entirely consistent with the capabilities framework we advance in this paper. An innovating firm’s exercise of dynamic capabilities is, after all, a managerial model for achieving evolutionary marketplace fitness.
More generally, the Courts on both sides of the Atlantic increasingly reassess antitrust doctrine in light of progress in economic theory. In its 1992 Opinion in Eastman Kodak v Image Technical Services, Inc, et al, the Supreme Court actually "instructed lower courts to judge parties' economic theories by how well they described 'actual market behavior'." There is no reason to believe it cannot adapt to a dynamic capabilities and dynamic competition perspective on the economics of digital markets. For example, in 2019, the Supreme Court came close to understanding the dynamics of ecosystem competition in Ohio, et al v American Express Company when it relied on the economic theory of multisided markets to raise the burden of proof on plaintiffs in vertical restraints cases involving transaction platforms like credit cards or online commerce markets. And in its 2017 judgment in Intel v Commission, the Court of Justice of the EU drew the correct conclusion from the consumer welfare paradigm when it held that dominant firms can lawfully exclude less efficient rivals.
In the US, the Amex case has been criticized as a caricature of multisided market theory (Melamed and Petit, 2019). But the idea underpinning the majority opinion in Amex makes perfect sense: competition takes place across markets wider than just one side of a transaction platform. The very cold reception of Amex in US antitrust scholarship in reality betrays a concern about operational impotency and, by extension, inflated enforcement costs. Short of tools to balance cross-market harms and efficiencies, a plausible risk is indeed that US courts will default to non-enforcement. This, admittedly, would be an unfortunate outcome. But we should be encouraged by Amex, not discouraged. More than ever, the time has come to search for practical ways to measure cross-market competition, not to "throw in the towel".
6. Conclusion
A new competition economics paradigm is needed. At a time when fiercely competitive diversified firms are a feature, not a bug, of the digital economy, a better diagnosis of the foundations of long-term competitive advantage can reduce the risks of type I and type II errors in competition cases and policy-making. Simplistic Chicagoan efficiency theorems and populistic antimonopoly narratives fall way short of the mark.
Economists and legal experts must develop tools and models that operationalize the idea that innovation drives competition as much as competition drives innovation. So far, the recognition that advancing dynamic competition and supporting innovation benefits consumers has been perfunctory in competition economics.
But there are reasons for optimism. The law, for a start, offers much leeway, and has already begun to move in this direction. Moreover, a large body of research in evolutionary economics, the behavioral theory of the firm, technology management, and strategic management has emerged which can be readily exploited. In this paper, we have tried to derive some insights from these studies in support of a framework, and a set of protocols, that can allow the incorporation of dynamic competition into the economics of competition policy. At the heart of our framework is a new conceptualization of the firm. We replace standard production theory with a Schumpeterian conception of the (innovating) firm which draws on the dynamic capabilities framework.
What we have written here is neither heretical nor polemical. Dynamic competition is a natural extension of traditional theories of competition. A trained competition eye will recognize the common thread between dynamic capabilities and policies against cartels that combat organizational structures limiting the uncertainty of competition or stabilizing strategic positions.52
Considerable hard work has been done and core foundations are in place. However, we recognize that we do not have all the answers. There is no handbook on dynamic competition available for consultation by competition courts and agencies. That is not a reason to pull back. The direction of travel toward an innovation-centric competition policy is clear. We believe that if just a small group of the competition policy community would join us—and we would welcome a good measure of its more junior members—then we can quickly forge a path ahead.
Acknowledgments
The authors would like to thank Ron Adner, John Blair, Charles Baden-Fuller, Connie Helfat, Henry Kahwaty, Richard Nelson, Thibault Schrepel, George Soros, Giorgio Monti, David Stallibrass, Doug Melamed and Bo Heiden for trenchant comments, observations, and relevant discussions, Sam Palmisano, and Fernand Sarrat for important insights, and Charles Clarke, Sara Guidi and Natalia Moreno Belloso for research assistance. The special issue editors and several anonymous referees provided uncommon insights for which we are especially grateful.
Footnotes
To avoid useless repetition, we use “products” as a shorthand for “products and services”.
As we write this, we are reminded of Coase's (1972) old remark that "if an economist finds something—a business practice of one sort or other—that he does not understand, he looks for a monopoly explanation". Coase's remark retains a ring of truth today.
Of course, one might add that falling prices can be sustained under monopoly if input prices are declining. But the evidence of falling input costs is mixed. Moreover, the past years have seen the addition of multiple costs on digital platforms arising from increased privacy protection obligations, as well as safety and cybersecurity risks.
Quoted in Rainey (2019). In recent years, several international organizations have invested efforts in this enterprise. The OECD, for example, has embraced complexity theory to make sense of the dynamics of economic systems. The focus has, however, been mostly on financial markets, and less on market competition.
We take issue with the fact that the endogeneity of competition/market structure is often omitted in the discussions of the welfare consequences of innovation/market power. This endogeneity means that there is a limit to what can be achieved through a focus on market structure, and a danger that competition policy that is focused on lowering concentration levels could actually harm innovation.
The Austrian School was founded by Menger (1871) in the 19th century.
Although Bork found perfect competition to be a “defective policy goal”, and advocated in favor of a productive efficiency friendly policy (High, 1984).
Quoted from Pleatsikas and Teece (2001a: 96).
Quoted in McDermott (2019).
See Soros (2013).
We note in particular important monographs by Aghion et al. (2021) and Gilbert (2020).
On the incumbency point, there is anecdotal evidence in the digital economy that firms, contrary to Arrow’s prediction, self-cannibalize with alacrity. One example is Netflix, when it moved from DVD by mail to online sales only.
One of us has tried to shed additional light on this issue (Teece, 1986, 2006, 2018). The “Profiting from Innovation” (PFI) framework shows that appropriability is not just a function of market structure. Complementary assets, timing, and the appropriability regime itself play a significant role. These factors outside of the Arrow-Schumpeter framework led Winter (2006) to note that the: “analysis of the innovators access to complementary assets, undertaken from a contracting perspective, can be seen as filling a significant gap in the previously theoretical discussion of appropriability.”
Schumpeter put more emphasis than Christensen does on the explosive power of entrepreneurialism. New technological phenomena are born of dynamic innovation-driven competition. As we have emphasized, competition is not fueled by static price-based rivalry, even though price per unit of performance may be lower even with no innovation (due to efficiency). Thus a lightbulb was more expensive than a candle, but even the first lightbulbs were orders of magnitude brighter (10× candlepower) than the candles they replaced.
Sometimes, incumbents can actually set the rules of the game by investing in predatory innovation (Schrepel, 2018), that is the “alteration of one or more technical elements of a product to limit or eliminate competition”.
See Schumpeter (1942: 82).
We are not suggesting that this is an explicit board action item. However, the culture, tone, and ownership a firm chooses has strong implications for dynamic capabilities.
These dynamic coordination issues are very different from the rent extraction of concern in the economics literature on innovation. In Farrell and Katz (2000), for example, a monopolist may extract so much rent from the firms selling a competitively supplied complement that their innovation is suboptimal even from the monopolist’s perspective.
The firm MIPS Computer Systems encountered this with its failed attempt to promote their Advanced Computing Environment (ACE) to compete with Sun’s Scalable Processor Architecture (SPARC). MIPS set up alliances with Compaq, DEC, Silicon Graphics, and other firms to pursue a RISC-based computing standard. However, soon after DEC and Compaq announced that they were going to reduce their commitment to ACE, the alliance fell apart because MIPS could not pick up the slack in some of the upstream activities. It failed both to develop competencies in key aspects of the technology and to create a common expectation for the alliance (Gomes-Casseres, 1994).
Achieving such alignment through internalization goes beyond what Barnard (1938) has suggested as the functions of the executive—which he sees in achieving cooperative adaptation.
Langlois (1992) defines dynamic transaction costs as “the costs of persuading, negotiating, coordinating and teaching outside suppliers” (113).
See Bloom et al. (2007).
This section draws in part from Baden-Fuller et al. (forthcoming).
Some of these issues were explained in Teece (1980).
Google is also an excellent example of vertical and lateral integration. Much of Google's ad revenue has long involved traffic acquisition cost (TAC): Google pays a revenue share to Google network members that run a Google search product on their websites or devices. In 2011, TAC represented 51% of Google's AdSense advertising fees. Google has therefore spent the past decade trying to decrease payments of the revenue share, end AdSense relationships, and integrate verticals served by Google network members.
In the dynamic capabilities approach, this is called “asset orchestration” and is part of “seizing” (Teece, 2007).
Google is an interesting organization, because it builds its own software and owns its own storage and even designs its own chips. Whether these last moves are necessary is an issue to be probed.
Facebook is discovering this currently (July 2020). For several years, it has dismissed concerns over how its algorithms, designed to maximize user engagement, spread hate speech and misinformation. It now faces a (time limited) boycott by major advertisers, with hundreds more considering such a move (Bond, 2020).
Schumpeter rightly added that the new thing “need not be spectacular or of historic importance”.
Economic evidence confirms that entrepreneurs play an outsized role in the fortunes of digital firms. Empirical economic studies have found that "many of the canonical superstar firms such as Google and Facebook employ relatively few workers compared with their market capitalization, underscoring that their market value is based on intellectual property and a cadre of highly skilled workers" (Autor et al., 2020). At a finer level, Athey and Luca discuss evidence of large numbers of PhD economists joining the ranks of tech companies to work on business problems like working with data; assessing and interpreting empirical relationships; understanding and designing markets and incentives; and reading the environment and strategic interactions (Athey and Luca, 2019). If anything, job market trends suggest that digital firms' ability to hire talent and orchestrate it is key to survival in rapidly changing marketplaces.
All the more so given that complexity arises from the fact that limited datasets might also be sufficient for small firms to enter some applications markets. See Gómez-Losada and Duch-Brown (2019).
Michael Porter has developed a theory of strategy around conduct designed to impair competition. As Porter (1981: 612) notes “public policy makers could use their knowledge of the sources of entry barriers to lower them, whereas business strategists could use theirs to raise barriers.”
Hicks called a company that takes a quick profit a Snatcher; a company trying to develop a steady business was a Sticker.
When firms are weak, there is a tendency to collude. The railroad cartels in the US in the depression years of the 1930s are a case in point.
One underappreciated point of the dynamic capabilities literature is that it considers that the bulk of activities that make up an organization’s dynamic capabilities should not be outsourced, and should not be absorbed. This is due to imperfect factor markets, the non-tradability of soft assets like value, culture and organizational experience, and distinctive competences.
See e.g., Yegge (2020).
One similarity with industries where heavy ex-ante cooperation takes place (for example, through standard setting organizations) is however that this process creates disputes as to if and under what conditions, those who brought initial supply (and demand) to the platform can remain participants in the ecosystem.
Id. Ben Thompson has talked of an “anti-Amazon alliance.”
See Thompson (2020).
The existence of Amazon and its clear clout in the market rather strongly suggests the European Commission missed the point: market control comes from aggregating customers; Google can no more restrict competition from sites that depend on Google than a car can restrict competition from a trailer it is towing. Winning online is not about functionality, but about what app or website customers open of their own volition. In the case of shopping, that website is increasingly Amazon, and now it is Google that is partnering with others in response.
Transamerica Computer Company Inc. v IBM, 481 F. Supp. 965.
We thank Doug Melamed for this trenchant observation.
288 U.S. 344 (1933).
384 U.S. 563, 570–571 (1966).
415 U.S. 486 (1974).
246 F.3d 708, 722 (D.C. Cir. 2001).
General Court (Fourth Chamber) Case T-79/12 (December 11, 2013).
General Court Case T-399/16 (May 28, 2020).
540 U.S. 398 (2004).
C-7/95 P, John. Deere Ltd v Commission, ECLI:EU:C:1998:256 at 88 and 90.
References