Abstract

The AI Act is poised to become a pillar of modern competition law. The present article offers those interested in competition law a critical guide to its key provisions. It also discusses the AI Act’s effect on innovation and competition within the EU single market.

I. INTRODUCTION

Europe is experiencing a legislative frenzy. In recent months, European institutions have adopted the Digital Markets Act (“DMA”), the Digital Services Act (“DSA”), the Data Act, the Data Governance Act, and the Artificial Intelligence Act (“AI Act”). Together, these texts run to hundreds of pages, making navigation of the new regulatory landscape a craft in its own right.

Faced with regulatory complexity, those interested in competition law may find comfort in the notion that possessing a great command of the DMA alongside familiarity with other relevant statutes is sufficient. It is not. A careful examination of the AI Act reveals the necessity for competition experts to acquire in-depth knowledge of the new rules on artificial intelligence. There are two reasons for this.

First, the AI Act has significant implications for competition law (II.). It grants market surveillance authorities new procedural powers whose benefits extend to competition agencies. The AI Act is also transforming computational antitrust, and it alters the approach courts and agencies may take when analyzing competition infringements. Second, the AI Act is expected to have an impact on competitive dynamics (III.). The scope of the AI Act is broad and expandable, indicating that its importance will continue to grow in the coming years. Article 2 explicitly states that the AI Act applies to all organizations involved in the introduction or deployment of AI systems within the Union, users of AI systems located within the Union, and providers or users of AI systems situated in third countries, provided that the generated output is used within the Union.1 Given the digitalization of the economy, one can expect a large number of companies to rely on or offer products and services incorporating AI systems.2

Against this background, the present article endeavors to provide those interested in antitrust with a critical guide to the AI Act. The article aims to bridge an existing gap in the literature by merging a “law and economics” analysis with a “law and technology” approach to underscore crucial considerations of the AI Act for competition (law).

II. IMPLICATIONS FOR COMPETITION LAW

The AI Act was drafted with the intention to apply “without prejudice to the application of Union competition law.”3 However, despite this intention, the AI Act affects competition law in three respects. First, it has implications for the procedural powers of antitrust agencies, as it significantly extends their investigative authority (A.). Second, the AI Act is likely to slow down the development of computational antitrust (B.). And third, it has considerable implications for the analysis of anti-competitive practices that it may facilitate (C.).

A. Extended Investigative Powers

The AI Act indirectly extends the investigative powers of competition agencies in Europe. This finding has so far been largely ignored because it is buried in the fine print of the AI Act. No less radical are the changes the AI Act brings to antitrust investigations.

The logic is this: the AI Act requires each Member State to establish or designate as national competent authorities at least one notifying authority and one market surveillance authority to implement the Regulation.4 Notifying authorities will handle the processes for evaluating, appointing, and notifying conformity assessment bodies.5 Market surveillance authorities will focus on ensuring compliance and enforcing the AI Act’s provisions.

To enable them to carry out their mission, the AI Act grants market surveillance authorities complete access to documentation and the training, validation, and testing datasets used in developing high-risk AI systems. This access may also involve, when required and with adequate security protections, the use of tools such as application programming interfaces (API) or other suitable technical methods for remote access.6 Taking it further, these authorities are permitted to access the source code of high-risk AI systems upon a justified request, provided that (a) such access is essential to evaluate compliance with the requirements outlined in Chapter III, Section 2, and (b) testing, auditing, or verification processes based on the data and documentation supplied by the provider “have exhausted or proved insufficient.”7

This is where competition agencies come in. As outlined in Article 74 of the AI Act, market surveillance authorities are required to provide annual reports to the Commission and relevant national competition agencies, including “any information” identified during their activities that “may be of potential interest” for enforcing EU competition law.8 This reporting duty is unidirectional: it empowers market surveillance authorities to send information to competition agencies on their own initiative, not competition agencies to request it. It has also been toned down. The proposals from the European Commission and Parliament required reporting “without delay” instead of “annually.”9 Still, Article 74 of the AI Act changes the face of competition investigations as market surveillance authorities enjoy investigative powers that complement those of competition agencies in two respects.10

First, the AI Act changes the conditions under which competition agencies can access information (i.e., the when). These agencies are typically limited to requesting information when they suspect a breach of competition law.11 For example, Article 18 of Regulation 1/2003 gives the European Commission the power to request all information necessary “[i]n order to carry out the duties assigned to it by this Regulation.”12 The same applies to national competition agencies, as Article 8 of the ECN+ Directive grants them the power to request any information deemed necessary for enforcing Articles 101 and 102 TFEU within a defined and reasonable timeframe.13 It also specifies that such requests must be proportionate and cannot force recipients to admit to violations of Articles 101 and 102 TFEU.14 In practice, the Commission has the option to issue simple information requests or mandate undertakings to provide information through formal decisions. Unlike with simple information requests, entities subject to a formal decision are required to send the Commission all requested information. The Commission must ensure that its decision to issue either a simple information request or a formal decision is proportionate, as emphasized by the Court of Justice.15

The same logic applies to the DMA. Under Article 21 of the DMA, the Commission is authorized to conduct on-site inspections of information, including “any data and algorithms,” to meet its responsibilities under the regulation. This means that the European Commission can only request information or access to data and algorithms if it suspects an infringement of the DMA, as highlighted in Article 26(1) of the DMA.16

Against this background, the AI Act extends the conditions under which the national competition agencies and the European Commission can access information. By giving market surveillance authorities the power to conduct compliance checks and send competition agencies all relevant information, the AI Act indirectly creates an effective mechanism to monitor the compliance of all companies with competition law, regardless of any suspicion of anti-competitive behavior or infringements of the DMA. Undoubtedly, competition agencies will use this information to launch investigations and become more proactive.17

Second, the AI Act changes the conditions for competition agencies to access sensitive information (i.e., the what). Not only must competition agencies be investigating a possible infringement; as it stands, the Court of Justice also requires the Commission to indicate the subject of its investigation in the request for information and to explain how the information requested relates to that possible infringement.18 For example, this means that the Commission cannot request access to a company’s algorithm simply because it is investigating a suspected practice. The Commission must justify why it specifically needs access to the algorithm. If the Commission fails to do so, companies can simply refuse to give access to their algorithms, training sets, source code, etc.19

All in all, the AI Act has a great impact on procedural antitrust. Not only does it allow competition agencies to access information without investigating a violation, but the AI Act also gives competition agencies access to sensitive information that they do not have systematic access to, even when they suspect a potential violation of competition law.

B. The New Face of Computational Antitrust

Computational antitrust, a field of research that explores how legal informatics can facilitate the automation of antitrust procedures and improve antitrust analysis, is gaining traction.20 Agencies around the world are creating data analytics teams, hiring computer scientists, and implementing various tools such as machine learning for cartel detection, natural language processing, network analysis, etc.21 Artificial intelligence is taking hold in most of these endeavors. As a result, competition agencies’ detection and enforcement capabilities are growing to the point where ex-post enforcement is becoming faster and more effective. But for computational antitrust to expand, rules and standards must govern the use (and misuse) of computational tools while maintaining incentives for development.22 The AI Act seeks to achieve such a delicate balance.

Recital 59 of the AI Act gives a first impression of the provisions governing the use of AI by law enforcement authorities. It holds that such AI systems should be classified as “high-risk” where “accuracy, reliability and transparency” are critical to avoiding harmful outcomes, preserving public confidence, and ensuring accountability along with effective means of redress.23 On that basis, Annex III of the AI Act classifies as “high-risk” the “AI systems intended to be used by or on behalf of law enforcement authorities (…) to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences.”24 Regarding the use of AI by judicial authorities, recital 61 of the AI Act identifies as “high-risk” those systems designed “to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”25 The only exception applies to administrative activities that have no impact on the administration of justice in individual cases. This exception includes tasks like anonymizing or pseudonymizing judicial documents, decisions, or data, facilitating communication among staff, and performing general administrative functions.26 In short, the entire chain of law enforcement is described as “high-risk,” from the use of AI by enforcers to its judicial review. This qualification comes with many requirements, as detailed in Chapter III of the AI Act. But how does it concern competition agencies?

Article 3(45b) of the AI Act defines a law enforcement authority as an entity authorized by a Member State to use public powers for the prevention, investigation, detection, or prosecution of criminal offenses and the enforcement of criminal penalties.27 This means that the use of AI by competition agencies to detect criminal offenses is classified as “high-risk.” This raises the question of whether—and which—competition infringements are criminal. In the absence of a European consensus on this issue, each national competition agency has to assess whether the practice it is investigating is criminal in nature. For example, Denmark, France, Greece, Ireland, Slovakia, and Slovenia impose criminal sanctions for hardcore cartels.28 Austria, Belgium, Finland, Germany, Hungary, Italy, Poland, and Portugal impose criminal sanctions for bid-rigging.29 These agencies are required to comply with the AI Act’s requirements for high-risk systems when using AI to detect these practices. But not when investigating other, non-criminal practices. This is likely to raise interesting technical and organizational issues. It will also discourage the use of AI to enforce and analyze criminal practices, which are often the most harmful.

C. AI Act-Based Practices

The AI Act aims to increase the transparency of AI systems. To this end, several provisions require the sharing of meaningful information between companies, although information sharing comes at the cost of facilitating collusive behavior and targeted abuse of dominance.

Article 19 of the AI Act holds that providers of high-risk AI systems must keep the “logs” (i.e., a recorded sequence of events) mentioned in Article 12(1).30 These logs contain a variety of information. They typically include system events (e.g., shutdown events), algorithmic operations (e.g., input data, model training), predictions and decisions (e.g., decisions or actions taken based on predictions), performance metrics (e.g., accuracy, precision, recall, F1 score), data handling (e.g., information about data ingestion and processing), resource utilization (e.g., CPU, GPU, memory), user interactions (e.g., how users interact with the AI system), timestamps (e.g., time information for each logged event), external system interactions (e.g., logs of interactions with external systems or APIs), etc.
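For illustration only, the sketch below shows what a single entry in such logs might look like. The field names and values are hypothetical; neither Article 12 nor Article 19 prescribes a particular format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LogEntry:
    """Hypothetical structure of one record in a high-risk AI system's logs."""
    timestamp: datetime        # time information for the logged event
    event_type: str            # e.g., "system_event", "prediction", "data_ingestion"
    input_reference: str       # pointer to the input data that was processed
    output_summary: str        # decision or action taken based on the prediction
    performance: dict = field(default_factory=dict)     # e.g., {"accuracy": 0.97}
    resources: dict = field(default_factory=dict)       # e.g., {"gpu_util": 0.63}
    external_calls: list = field(default_factory=list)  # interactions with external systems or APIs

entry = LogEntry(
    timestamp=datetime.now(timezone.utc),
    event_type="prediction",
    input_reference="request-42",
    output_summary="loan application flagged for manual review",
    performance={"confidence": 0.87},
    external_calls=["credit-bureau-api"],
)
print(entry)
```

Even such a minimal record reveals when a deployer used the system, what it was used for, and which external services it relies on, which is precisely the kind of information discussed below.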

The information contained in logs can be sensitive from a competition law perspective, as it allows the AI system provider to better understand the business practices and strategies of its users (what other software they use, when they use it, what decisions they have made using AI, etc.). Collusion can ensue. Targeted abuse of dominance can also ensue. In short, Article 19 of the AI Act could lead to cases similar to the one opened by the European Commission against Amazon for its use of marketplace sellers’ data.31

Articles 16, 23, 24 and 25 of the AI Act further promote transparency requirements among market participants and thus create a similar risk of fostering anti-competitive behaviors. Specifically, Article 16 requires AI system providers to comply with an extensive list of obligations, including all the requirements for high-risk AI systems listed in Section 2, the implementation of a “quality management system,” and the keeping of technical documentation and logs.32

Proving compliance with Article 16 will require access to information that is sensitive from a competition law perspective. This becomes problematic considering Articles 23, 24, and 25.

Article 23 of the AI Act requires importers to refrain from placing AI systems on the market if they have “sufficient reason” to suspect that a high-risk AI system fails to meet the requirements of this Regulation, has been tampered with, or is accompanied by counterfeit documentation.33 Distributors are subject to the same obligations, as specified in Article 24, which requires them to verify that both the provider and importer of the AI system have met their respective obligations.34 Article 25 adds that any distributor, importer, deployer, or other third party will be considered a provider of a high-risk AI system under any of the following circumstances: (a) they affix their name or trademark on an existing high-risk AI system on the market; (b) they make substantial modifications to an existing high-risk AI system; or (c) they alter the intended purpose of an AI system that was not initially classified as high-risk, leading it to meet the high-risk criteria outlined in Article 6.35

In other words, providers, distributors, importers, deployers or other third parties intending to market an AI system will first need access to sensitive information held by the provider, such as the functioning of the AI system, its training set, logs, and so on. There is a non-negligible risk that the sharing of this information raises issues related to trade secrets, but also creates incentives for collusive or abusive behavior. The level of transparency between companies imposed by the AI Act could indeed inadvertently expose commercially sensitive information. It could give market players a detailed understanding of each other’s strategies, capabilities, and operational choices. The sharing of logs could also allow companies to monitor each other’s behaviors and align their actions. For example, logs containing timestamps or external system interactions might reveal patterns in pricing, resource allocation, or market responses. Logs that record predictions, decisions, or interactions with users or external systems could provide clues about pricing algorithms or market strategies.

With this in mind, one might ask whether EU institutions’ decision to prioritize safety over competition in these articles could affect the assessment of related anti-competitive practices. Consider the risk of collusion. Article 101 TFEU applies when the sharing of commercially sensitive information is likely to impact the business strategies of competitors.36 But the Guidelines on the applicability of Article 101 to horizontal co-operation agreements hold that information that is typically not considered commercially sensitive includes non-confidential technical matters relevant to the broader industry, such as standards or issues related to health and safety.37 By forcing the sharing of AI logs, technical documentation, and instructions for use, Articles 16, 23, 24, and 25 of the AI Act deem this information necessary for safety purposes. It could then be argued that sharing related information should not raise competition issues, or, in other words, that competition concerns are outweighed by the increase in safety. Competition agencies will have to consider this argument. The European Artificial Intelligence Board, which aims to collaborate with various EU institutions, bodies, agencies, and relevant expert groups—particularly in areas like product safety, cybersecurity, and competition38—will be an appropriate place to discuss the ramifications of the security/competition trade-off, and whether the collusion-like behavior initiated thanks to the AI Act should be exempted.

III. IMPACTS ON COMPETITIVE DYNAMICS

The AI Act affects not only competition law but also the dynamics of competition. The European Commission has emphasized its desire to improve “access to the entire single market”39 by preventing the fragmentation of the AI market “with individual Member States taking unilateral actions.”40 But while the AI Act is on track to prevent fragmentation, it may still distort the internal market (A.) and reduce access to the single market as a whole (B.). These issues are condensed in the last-minute provisions on general-purpose AI models (C.). What’s more, the AI Act’s lack of flexibility could leave these distortions uncorrected (D.).

A. Distortion within the Single Market

There are two reasons why the AI Act may distort competition within the single market. First, the AI Act’s tech neutrality approach is not neutral in practice. Second, the AI Act distributes the compliance burden unevenly.

In defining the scope of the AI Act, the European Commission sought a definition of an AI system that was “as technology neutral (...) as possible.”41 This allegedly came from a good place, as the Commission received “many comments underlining the importance of a technology neutral” regulation.42 The concept of neutrality refers to the desire to avoid favoring or discriminating against particular technologies. And indeed, one would not want the European Commission to dictate how AI should be designed.

The AI Act nonetheless fails to be neutral. By refraining from discriminating between AI systems based on how they function, the AI Act indirectly penalizes the systems that are safer and easier to control. The logic is this: AI systems that behave predictably (i.e., deterministic AI systems with low or no randomness)43 do not pose the same risks as more “creative” or unpredictable AI systems (i.e., nondeterministic AI systems with high randomness).44 When both are regulated identically, companies that have already mitigated risks through the design of their AI system face the additional burden of also complying with the AI Act’s strictest provisions. This explains why the use case approach adopted in the AI Act (i.e., imposing different burdens depending on where AI is being deployed) is not neutral. Neutrality requires imposing different regulatory burdens on different designs.
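To make the distinction concrete, the following sketch (purely illustrative and not drawn from the AI Act) contrasts a deterministic decision rule, which always returns the same output for the same input, with a nondeterministic one, which samples its output at random.

```python
import random

scores = {"approve": 0.70, "review": 0.25, "reject": 0.05}

def deterministic_decision(scores):
    # Same input always yields the same output: pick the highest-scoring option.
    return max(scores, key=scores.get)

def nondeterministic_decision(scores):
    # Same input may yield different outputs: sample in proportion to the scores.
    options, weights = zip(*scores.items())
    return random.choices(options, weights=weights, k=1)[0]

print(deterministic_decision(scores))                          # always "approve"
print([nondeterministic_decision(scores) for _ in range(5)])   # varies from run to run
```

The first system can be audited exhaustively for a given set of inputs; the second cannot, which is why the two do not pose the same level of risk.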

Considering this discussion, one can only regret that European institutions chose not to combine their use case approach with a more technical approach. While the use case approach has some merits—AI applications in health, education, and recruitment are inherently riskier than in making wooden spoons—combining it with a technical approach would have been more effective. This combination would have imposed the highest compliance requirements only on AI systems used in high-risk sectors that are also nondeterministic. Conversely, systems used in high-risk sectors but with highly deterministic outputs would have faced lower compliance requirements, reflecting their lower degree of actual risk.

Another potential distortion to the single market comes from the uneven distribution of the regulatory burden. GDPR is a good example of a past regulation with a poorly distributed regulatory burden. By imposing the same rules on all companies, regardless of the size of their user base, GDPR favored large companies that were able to ensure compliance at a proportionally lower cost than small companies.45 It then seems that EU policymakers learned their lesson when they implemented the Digital Services Act, which, unlike the GDPR, imposes increasingly stringent rules as the company acquires more users. But the AI Act falls into the same trap as GDPR. Considering the growing consensus on the competitive impact of GDPR, there are reasons for concern. Consider this: GDPR increased the cost of data storage by ~20 percent. As a result, two years after its implementation, EU firms were storing, on average, 26 percent less data than their US counterparts. The level of computation by EU firms also decreased by ~15 percent.46 Because of GDPR, EU firms became less data-intensive in an increasingly digital world.

The European Commission justifies this approach on page 9 of its proposal, stressing that high-risk AI systems require stringent standards for data quality, documentation, traceability, transparency, human oversight, accuracy, and robustness. These measures are considered critical to addressing risks to fundamental rights and safety that are not adequately managed by current legal frameworks.47 Because these rules are considered “strictly necessary,” the Commission suggests that compliance cannot be limited to certain companies. But this logic is challenged by the exemptions set up in Article 2 of the AI Act, such as the ones for AI systems used exclusively for military, defense or national security purposes.48 If anything, these exemptions indicate that the AI Act could well have incorporated more graduated provisions.

Let us consider the implications of regulating all high-risk systems the same regardless of their functioning by using Article 11 as an example. Article 11 of the AI Act requires that all companies develop technical documentation for a high-risk AI system before it is introduced to the market or deployed, and that they keep it up to date.49 The documentation must be organized to show that the system meets the requirements specified in the section on high-risk AI systems, and to supply national authorities and notified bodies with detailed and clear information needed to assess the system’s compliance with these standards.50

First and foremost, the technical documentation must contain the information listed in Annex IV. Details should include how the AI system interfaces with or can be used in conjunction with external hardware or software, including other AI systems. The documentation must also provide a description of the hardware intended for the system’s operation and an outline of the processes and methodologies used during the system’s development. This includes any use of pre-trained models or third-party tools, along with an explanation of how these were applied, integrated, or modified by the provider.51 Expertise in data science and computer science is crucial for this process.
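A provider might organize this material as a structured checklist before drafting the documentation itself. The section names below loosely paraphrase parts of Annex IV for illustration; they are not an official template.

```python
# Illustrative checklist loosely paraphrasing parts of Annex IV; not an official template.
technical_documentation = {
    "general_description": {
        "intended_purpose": None,
        "interaction_with_external_hardware_or_software": None,  # including other AI systems
        "hardware_on_which_the_system_runs": None,
    },
    "development_process": {
        "methods_and_steps_used": None,
        "pre_trained_models_or_third_party_tools": None,  # how they were applied, integrated, or modified
    },
    "compliance_evidence": {
        "requirements_of_chapter_iii_section_2": None,
        "fundamental_rights_considerations": None,
    },
}

# Track which items still need input from the data science, engineering, or legal teams.
missing = [f"{section}/{item}"
           for section, items in technical_documentation.items()
           for item, value in items.items() if value is None]
print(f"{len(missing)} documentation items still to be completed")
```

The point of the sketch is not the code itself but the breadth of inputs it implies: each entry requires contributions from engineers, data scientists, or lawyers, which is precisely the staffing problem discussed below.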

The technical documentation must also include the information necessary to assess whether the functioning of the AI system complies with the requirements of Section 2 of the AI Act, which are designed to protect EU fundamental rights.52 The rights at stake include the right to freedom of expression (Article 11 of the Charter of Fundamental Rights), freedom of assembly (Article 12), and freedom of the arts and sciences (Article 13). A legal expert will be needed to ensure the technical documentation contains the information required to evaluate compliance with these fundamental rights, as described in the European Convention on Human Rights and the jurisprudence of the European Court of Human Rights.

In summary, employees with both technical and legal expertise are needed to prepare the documentation required by Article 11 of the AI Act. For comparison, it took nearly 800 Microsoft employees and a decade to compile the communication protocols enabling interactions between non-Microsoft servers and Windows clients, as mandated by the D.C. Circuit.53 Coming back to the AI Act, it is difficult to see how a startup with just a couple of employees could prepare the documentation without cutting back on critical expenses and falling behind the competition. To put it bluntly, the AI Act is set to distort the single market in favor of those companies that can afford to comply without compromising their ability to innovate.

In response, we suggested making a distinction between companies based on their number of users, and imposing the most stringent requirements only on the most successful companies.54 The final version of Article 11 took a step in this direction by introducing an exception for small and medium enterprises (“SMEs”). SMEs are now allowed to present the technical documentation outlined in Annex IV in a simplified format.55 The European Commission has yet to create this simplified form, with all the uncertainty that entails. In any case, considering that Article 11 is the sole provision in Section 2 on “Requirements for high-risk AI systems” to include an exemption for SMEs, this isolated accommodation appears insufficient to prevent market distortions.

Additionally, the provisions on regulatory sandboxes may have limited effectiveness in offsetting the regulatory burdens imposed on SMEs. Efforts are made to attract SMEs to sandboxes by providing access free of charge56 and ensuring the application and selection process is “simple, easily intelligible, and clearly communicated.”57 But the requirements for testing high-risk AI systems in “real world conditions” remain burdensome and raise concerns about whether sandboxes will effectively appeal to SMEs.58 For example, sandboxed companies will be required to submit their testing plans for approval by the market surveillance authority.59 The process conflicts with the approach of many—though not all—SMEs focused on rapid, permissionless innovation to challenge incumbents. Moreover, the AI Act introduces further bureaucratic hurdles by requiring companies to secure dated and documented consent from participants before testing.60 These layers of bureaucracy cast doubt on the ability of sandboxes to effectively promote innovation and competitiveness. Absent strong positive assumptions, their effects will need to be validated through empirical studies.

B. Access to the Single Market

The AI Act is a complex piece of regulation whose impact on the internal market cannot be fully anticipated. This does not mean that none of its effects can be foreseen. Several reasons are outlined below explaining why the AI Act will make it more challenging for companies to access the single market.

First, some of the language used in the AI Act is unnecessarily vague. Let us consider several examples. Article 5 of the AI Act prohibits the deployment or use of AI systems that employ subliminal techniques beyond a person’s consciousness. It also bans intentionally manipulative or deceptive AI systems. These types of AI systems are forbidden when they are designed to, or have the effect of, materially distorting an individual’s or group’s behavior by impairing their ability to make informed decisions, resulting in actions they would not have otherwise taken, and causing or being likely to cause significant harm to that person, another individual, or a group.61 Depending on how it is interpreted, AI Act Article 5 could prohibit segments of AI-based advertising in Europe. Companies in the field will likely litigate, while market surveillance authorities will face difficulties in proving that the AI system is “materially distorting” behaviors in a way to create “significant” harm.

Article 5 of the AI Act also prohibits the use of “real-time” remote biometric identification systems in public spaces for law enforcement purposes, unless one of several exceptions is met.62 The meaning of “real-time” will give rise to litigation.63 While a one-second delay will likely be deemed “real-time,” a twenty-second delay is more debatable. A limit for what constitutes “real-time” will have to be found, likely depending on the specifics of the case.64 Here again, litigation will likely flourish on the issue. And while competition between providers of biometric identification systems in “batch mode” (i.e., not in real-time) will survive, the vagueness of the AI Act’s prohibitions will likely affect adjacent use cases.

Similar concerns about the vagueness of the AI Act also arise with high-risk AI systems. Article 10, for example, holds that data sets “shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.”65 This version marks a clear improvement over the original proposal by the European Commission, which required data sets to be entirely “free of errors.”66 Although the original wording was precise, it was unrealistic. Data and computer scientists generally agree that large databases can never be completely error-free.67 Some of the datasets used to train foundation models contain trillions of entries; companies cannot manually check them all to ensure compliance with Article 10.68 And even if they tried, the cost would be prohibitive. But while the adopted version of Article 10 is superior to the original one, it uses ambiguous terms such as “sufficiently” and “to the best extent possible” which are likely to result in litigation.
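A back-of-the-envelope calculation makes the point about manual verification. Assuming, hypothetically, that a reviewer can check one entry per second without pause, a trillion-entry dataset would take tens of thousands of person-years to inspect:

```python
entries = 1_000_000_000_000       # one trillion entries (order of magnitude discussed above)
seconds_per_entry = 1             # hypothetical: one second of human review per entry
seconds_per_year = 365 * 24 * 3600

person_years = entries * seconds_per_entry / seconds_per_year
print(f"{person_years:,.0f} person-years of review")   # roughly 31,710 person-years
```

Whatever the exact figures, the order of magnitude explains why exhaustive manual review is not a realistic compliance strategy.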

Article 14 of the AI Act adds to the confusion around high-risk systems by requiring that individuals responsible for human oversight “remain aware” of the potential for over-reliance on the output generated by high-risk AI systems.69 This leaves companies uncertain about how to ensure that their employees always maintain awareness of these AI-related risks.

Despite the vagueness of the language, the fines for non-compliance are set high. According to Article 99, non-compliance with Article 5 on prohibited AI practices exposes individuals or companies to fines of up to 7% of their total worldwide annual turnover or €35 million, whichever is greater.70 Non-compliance with most other provisions of the AI Act can result in fines of up to 3% of total worldwide annual turnover or €15 million, whichever is greater.71 Finally, non-compliance with Article 5 by EU institutions, agencies, and bodies can result in fines of up to €1,500,000, while non-compliance with the other articles can result in fines of up to €750,000.72
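To illustrate how these caps operate (the turnover figure is hypothetical):

```python
def fine_cap(worldwide_annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the fine under Article 99: a share of turnover or a fixed amount,
    whichever is greater (7% / EUR 35m for Article 5 violations; 3% / EUR 15m otherwise)."""
    if prohibited_practice:
        return max(0.07 * worldwide_annual_turnover_eur, 35_000_000)
    return max(0.03 * worldwide_annual_turnover_eur, 15_000_000)

# Hypothetical provider with EUR 2 billion in worldwide annual turnover
print(fine_cap(2_000_000_000, prohibited_practice=True))   # 140,000,000.0
print(fine_cap(2_000_000_000, prohibited_practice=False))  # 60,000,000.0
```

For large providers, the percentage cap quickly dwarfs the fixed amounts, which is what gives these provisions their deterrent weight.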

Second, several of the provisions listed in the AI Act are significantly costly to comply with. This is because, when balancing competition against other objectives, the AI Act frequently prioritizes these other objectives over competition. Several examples drive the point home.

When it comes to safety and competition, Recital 73 expresses the intention to require that high-risk systems be designed in a way that allows human oversight of their functioning.73 Article 9 specifies that a risk management system must be established, implemented, documented, and maintained for high-risk AI systems.74 This system shall be a continuous iterative process that is planned and carried out throughout the entire lifecycle of the high-risk AI system, with regular systematic reviews and updates.75 The design and implementation of such risk management systems will come at a cost. To be clear, this is not to suggest that these measures should not be implemented because they are costly, but rather that the incentives to enter the single market are reduced in favor of the security of high-risk systems. The same goes for Articles 14 and 15 of the AI Act. The first mandates human oversight,76 while the second requires providers to design high-risk AI systems to achieve and maintain suitable levels of accuracy, robustness, cybersecurity, and consistency throughout their lifecycle.77 These articles reflect the European institutions’ preference for safety over competition.

When it comes to the trade-off between transparency and competition, Article 12 mandates that high-risk AI systems automatically record logs throughout their entire lifecycle.78 The underlying aim of that Article is to increase transparency, not full explainability of AI systems. It remains burdensome. The same goes for Article 13, which requires high-risk AI systems to be designed in a way that allows deployers to understand the system’s output.79 This version is an improvement over the European Commission’s proposal to “enable users to interpret the system’s output and use it appropriately”80—how could users “interpret” the system’s output without access to the training data? It remains problematic because (i) it could allow reverse engineering by the companies holding the logs and (ii) the required degree of transparency must be assessed based on the understanding of third parties, which, by definition, cannot be probed in advance. Transparency is, here again, favored over competition and access to the single market.

Third, the risk of industry capture should not be underestimated. There are two ways for companies to ensure compliance with the AI Act. First, companies can follow one of the standards developed by standardization organizations (the European Committee for Standardisation, the European Committee for Electrotechnical Standardisation, and the European Telecommunications Standards Institute).81 Providers of AI systems that do so will enjoy a presumption of conformity. Second, companies can assess their own compliance with the essential requirements of the AI Act.82 Should they do so, the AI Act suggests that companies will need to consider both the “intended purpose” and the “generally acknowledged state of the art” in AI and related technologies when addressing the requirements for high-risk AI systems.83 As a result, the majority of companies can be expected to follow standards—rather than assessing compliance themselves—to save costs and increase legal certainty. This dynamic in favor of standards is strengthened by the requirement that self-assessments must be approved by a “notified body” if the conditions outlined in Article 43 of the AI Act are met.84

With this in mind, the risk of these standard-setting organizations—and notified bodies—being captured by a handful of players should be closely scrutinized. As it stands, the interests of small and medium-sized enterprises, consumers, and trade unions are represented, but the internal functioning of these organizations remains opaque.85

EU Regulation 1025/2012, which governs standard-setting organizations, highlights the critical importance of European standards for SME competitiveness. However, it also notes that SMEs are sometimes underrepresented in European standardization efforts.86 To address this issue, Article 17 mandates that SMEs, consumer organizations, and environmental and social stakeholders be “appropriately represented” and given the opportunity to “participate” in European standardization activities.87 The vagueness of this provision is unsatisfying. Making the functioning of these organizations more transparent would help, possibly through a revision of Regulation (EU) No 1025/2012.88

C. General-Purpose AI Models and the EU Single Market

The European Commission’s 2021 proposal did not cover foundation models. The release of ChatGPT in November 2022 changed the dynamics and prompted efforts to include them in the AI Act. The final version of the Act now features an entire chapter dedicated to what EU institutions refer to as “general-purpose AI models” (“GPAI”).89

The regulation of GPAIs in the AI Act constitutes a significant shift as it introduces a new regulatory pillar. The first pillar of the AI Act adopts a ‘risk-based approach,’ where risks are mainly defined by use case (see Sections III.A and III.B above). The second pillar introduces a ‘capacity-based approach.’ GPAIs are versatile and can serve many purposes. Consequently, EU institutions have chosen not to tie the regulation of GPAIs to specific use cases but to regulate based on their capabilities: the more capable the GPAI, the more stringent the regulation.

Oddly enough, the distinct nature of these two pillars does not separate them entirely. The second pillar on GPAIs shares at least one common flaw with the first: it distorts competition among (GPAI) providers and creates regulatory barriers to entering the EU single market. Let us unpack this further.

GPAIs are described as AI models capable of demonstrating “significant generality,” often trained on a “large amount of data” using “self-supervision at scale.”90 These models are designed to perform a “wide range of distinct tasks” and can be incorporated into “a variety of downstream systems or applications,” excluding those used solely for research, development, or prototyping prior to market release.91 This definition suffers from the use of unnecessarily vague terms: GPAIs rely on a “large amount” of data and self-supervision “at scale,” they display “significant generality,” they can perform a “wide range of distinct tasks,” and they can be integrated into a “variety” of downstream systems or applications (emphasis added).

Besides the problematic definition, GPAIs are subject to provisions similar to those imposed on high-risk AI systems (Section 2 of the AI Act). Article 53 requires GPAI providers to prepare and maintain updated technical documentation for the model, which must include details about its training and testing processes as well as the outcomes of its evaluation and, at a minimum, all information listed in Annex XI.92 Providers are also required to prepare, maintain, and make available relevant information and documentation to AI system providers who plan to integrate the general-purpose AI model into their systems. The documentation must allow these providers to clearly understand the model’s capabilities and limitations and, at a minimum, must contain the elements set out in Annex XII.93 Lastly, GPAI providers must establish a policy to comply with Article 4(3) of Directive (EU) 2019/790—which allows the use of copyrighted content for text and data mining unless the rights are explicitly reserved—through state-of-the-art technologies.94 GPAI providers must also publish a detailed summary of the training data used for the general-purpose AI model following a template specified by the AI Office.95

In short, the issue of requiring SMEs to produce detailed technical documentation at very high cost reappears in these provisions. Similarly, compliance with some provisions on GPAIs requires anticipating whether other AI providers will have a “good understanding” of the information sent by the GPAI providers, leading to best guesses and potential litigation. This situation mirrors Article 13, where compliance depends on the understanding of third parties.

The only way companies can avoid these provisions is by offering a model under a free and open-source license. The license must allow access, use, modification, and distribution of the model, while also making its parameters, including weights, architecture details, and usage information, openly available to the public.96 This exception does not apply to general-purpose AI models with systemic risks (see below). Besides, the exception suffers from relying on a purely technical definition of open source that is artificially binary.97 The definition overlooks important aspects of defining the openness of AI systems, such as the inclusion of contribution policy, credit and revenue sharing, exit rights, accessible dispute resolution between contributors, etc.98 As a result, models that are not genuinely open will benefit from the exemption, while truly open models may not. The definition of open source in the AI Act is also inconsistent with the one in the New Product Liability Directive and the Cyber Resilience Regulation.99 Nonetheless, the open-source exemption remains a positive development considering that open source fosters competitive dynamics by increasing transparency and allowing forks.100

Article 51 of the AI Act adds another layer of uncertainty regarding the definition of GPAI with systemic risk. A GPAI is deemed to present systemic risks if it demonstrates high-impact capabilities, evaluated through “appropriate” technical tools and methodologies.101 What is “appropriate” remains unspecified. The Act only specifies that a general-purpose AI model is presumed to have high-impact capabilities if the total computational power used for its training, measured in floating point operations, exceeds 10²⁵.102 As of April 2024, four foundation models are above that threshold: Gemini Ultra, Mistral Large, GPT-4, and Inflection 2.103 Alternatively, a GPAI is deemed to present systemic risks if the Commission, either acting on its own initiative or in response to a qualified alert from the scientific panel, determines that its capabilities or impact are equivalent to those described in Article 51(1a).104 The criteria provide the Commission with considerable discretion, further bolstered by its power to adopt delegated acts to modify the thresholds, benchmarks, and indicators that define a GPAI with systemic risk.105 Following the designation process, the Commission is required to publish a list of general-purpose AI models identified as having systemic risk and to maintain and regularly update that list.106
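To see how the 10²⁵ FLOP presumption might play out, the sketch below applies a widely used rule of thumb for dense transformer models (training compute ≈ 6 × parameters × training tokens). The approximation and the model figures are illustrative assumptions, not part of the AI Act.

```python
def training_flops(parameters: float, training_tokens: float) -> float:
    # Common approximation for dense transformer training compute: ~6 * N * D
    return 6 * parameters * training_tokens

THRESHOLD = 1e25  # Article 51(2) presumption of high-impact capabilities

# Hypothetical models; parameter and token counts are illustrative only.
for name, params, tokens in [
    ("model_a", 7e9, 2e12),      # 7 billion parameters, 2 trillion tokens
    ("model_b", 1.8e12, 13e12),  # 1.8 trillion parameters, 13 trillion tokens
]:
    flops = training_flops(params, tokens)
    print(f"{name}: {flops:.2e} FLOPs -> presumed high-impact: {flops > THRESHOLD}")
```

The exercise shows how sensitive the designation is to estimates of training scale, which providers do not always disclose, and hence how much will turn on the “appropriate” tools and methodologies the Commission accepts.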

The provisions imposed on GPAIs with systemic risks, in addition to those applied to all GPAIs, require model evaluation, risk assessment and mitigation, incident tracking and reporting, and enhanced cybersecurity protection.107 Companies will be presumed compliant with Article 55 if they rely on an approved code of practice (Article 56) or on a European harmonized standard. Alternatively, companies must provide evidence of alternative, adequate compliance measures for evaluation by the Commission.108 This situation, once more, raises concerns about the representation of SMEs within standard-setting organizations.

D. Fixed, Non-adaptive Issues

The various issues listed in sections A., B. and C. will eventually translate into facts—and empirical evidence—that EU institutions will be able to observe. But regardless of their findings, EU institutions will not be able to adapt the substance of the AI Act. This is because, despite promising language (i.e., various mentions of a “future-proof” regulatory framework that “can be adapted where necessary”),109 the AI Act lacks the mechanisms to be truly adaptive.

First and foremost, Article 4 of the European Commission’s proposal, which would have allowed it to “amend the list of techniques and approaches listed in Annex I” (i.e., to change the definition of what constitutes an AI system), has been removed from the final version of the AI Act.110

The Commission retains the power to amend the list in Annex III by removing certain high-risk AI systems, i.e., to change the definition of what constitutes a high-risk AI system.111 That power is scattered across the AI Act. The Commission may only exercise it under certain conditions: (i) the high-risk AI system must no longer present substantial risks to fundamental rights, health, or safety; and (ii) its removal must not compromise the overall level of protection for health, safety, and fundamental rights established under Union law.112 In addition, Article 6 empowers the Commission to adopt delegated acts to reclassify AI systems listed in Annex III (high-risk AI systems) as not high-risk.113 Finally, Article 112 empowers the Commission to suggest amendments to Annex III (high-risk systems), Article 5 (prohibited AI systems) and Article 50 (specific AI systems such as emotion recognition systems, AI-generated deep fakes, etc.).114 A broader mandate is given to the Commission to “submit appropriate proposals to amend this Regulation (…).”115 These possible amendments referenced in Article 112 will require the approval of the European Parliament and the Council.

Perhaps equally important is Article 112(7) of the AI Act which requires the Commission to assess the impact and effectiveness of voluntary codes of conduct. This evaluation, to be conducted by August 2, 2028, and every three years thereafter, seeks to encourage compliance with the requirements for high-risk AI systems even among those not classified as high-risk. It may also include additional requirements for non-high-risk AI systems, such as those related to environmental sustainability.116 In essence, the Commission is laying the groundwork to enforce compliance with new standards in the future. These requirements would come in addition to the current provisions that encourage, but do not require, companies providing non-high-risk AI systems to develop codes of conduct.

In short, the AI Act simply allows the Commission to amend the list of “high-risk” AI systems without requiring approval from other EU institutions. The list of prohibited AI systems and general-purpose AI models is not directly adaptable by the Commission, nor is the list of obligations imposed on high-risk and general-purpose AI systems. If the provisions prove to be ineffective in achieving their objectives and/or detrimental to innovation, the European Commission will be required to write a report and convince other EU institutions to act upon it.

IV. CONCLUSION

The AI Act is a future pillar of competition law and should be treated as such. It complements the DMA, the Data Act, and others in creating a new environment for the regulation of digital businesses. No competition practitioner can afford to ignore its intricacies.

The AI Act is also expected to impact competitive dynamics. While it helps prevent market fragmentation, which is a positive outcome, it also creates barriers to entry into the EU single market and introduces market distortions that could have been avoided. This is particularly concerning given that several scholars have criticized the AI Act for its ineffectiveness in protecting fundamental rights.117 The AI Act’s negative impacts on innovation and competition are not necessarily offset by corresponding benefits.

But the situation is not beyond repair. The AI Act’s ultimate impact depends on what we make of it. We can collectively strengthen its benefits and mitigate its drawbacks by engaging with policymakers, documenting and learning from its effects, integrating it into everyday competition law practice, and educating lawyers, economists, data scientists, and computer scientists around it. The key prerequisite for all this is combining our expertise. There lies the biggest challenge ahead.

Footnotes

1

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (hereafter “AI Act”), Article 2.

2

Baily, M. N., et al. (2023) Machines of Mind: The Case for an AI-Powered Productivity Boom. The Brookings Institution, 9 October. Available at: https://perma.cc/6D8P-8BPY; Chui, M., et al. (2023) The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey & Company. Available at: https://perma.cc/MJE7-GAWN.

3

European Commission (2021) Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021) 206 final) (hereafter “European Commission’s proposal”), p. 4.

4

AI Act, Article 70.

5

AI Act, Article 28.

6

AI Act, Article 74(12). Note that the European Parliament’s AI Act proposal eliminated access to the API, see Article 64 of the Amendments adopted by the European Parliament on 14 June 2023, on the proposal for a regulation of the European Parliament and of the Council on laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206—C9–0146/2021–2021/0106(COD)) (hereafter “European Parliament’s proposal”).

7

AI Act, Article 74(13). Note that the European Parliament’s AI Act proposal changed “source code” to “training and trained models of the AI system, including its relevant model parameters,” see Article 64 of the Parliament’s proposal.

8

AI Act, Article 74.

9

See the European Commission’s proposal, article 63; also, the European Parliament’s proposal, Article 63.

10

Note that market surveillance authorities can also include representatives from national competition agencies. This is exemplified in Hungary, where a government resolution, 1301/2024. (IX. 30.) Korm. Határozat, mandates the creation of a new body with representatives from the Hungarian Competition Authority to fulfill the roles of both the notifying authority and the market surveillance authority, https://perma.cc/7QYM-J486.

11

Sweden (2018) Investigative Power in Practice—Breakout Session 2: Requests for Information—Limits and Effectiveness, Global Forum on Competition (DAF/COMP/GF/WD(2018)38), recital 6. Available at: https://perma.cc/D9WA-9XGL.

12

Article 18 of the Regulation 1/2003.

13

Article 8 of the Directive ECN+.

14

Article 8 of the Directive ECN+.

15

Schwenk Zement v Commission, Case C-248/14 P, ECLI:EU:C:2016:150.

16

Article 26(1) of the DMA.

17

OECD (2013) Ex Officio Cartel Investigations and the Use of Screens to Detect Cartels, p. 9, 108.

18

HeidelbergCement AG v Commission, Case C-247/14 P, ECLI:EU:C:2016:149, para. 20.

19

See Sweden (2018) Investigative Power in Practice—Breakout Session 2: Requests for Information—Limits and Effectiveness, Global Forum on Competition (DAF/COMP/GF/WD(2018)38), recital 6. Available at: https://perma.cc/D9WA-9XGL.

20

Schrepel, T. (2021) Computational Antitrust: An Introduction and Research Agenda, Stanford Computational Antitrust, 1; Schrepel, T. and Groza, T. (2022) The Adoption of Computational Antitrust by Agencies: 2021 Report, Stanford Computational Antitrust, 2, p. 78.

21

Schrepel, T. and Groza, T. (2023) The Adoption of Computational Antitrust by Agencies: 2nd Annual Report, Stanford Computational Antitrust, 3, p. 55; Schrepel, T. and Groza, T. (2024) Computational Antitrust Within Agencies: 3rd Annual Report, Stanford Computational Antitrust, 4, p. 53.

22

See Stanford Computational Antitrust’s publications, https://perma.cc/Q6K6-9GDU.

23

AI Act, Recital 59.

24

Annex III of the AI Act.

25

AI Act, Recital 61.

26

AI Act, Recital 61.

27

Article 3(45b) of the AI Act.

28

OECD Secretariat (2020) Criminalisation of Cartels and Bid Rigging Conspiracies: A Focus on Custodial Sentences, Working Party No. 3 on Co-operation and Enforcement. Available at: https://perma.cc/TF3F-ERD3.

29

Ibid.

30

AI Act, Article 19.

31

European Commission (2020) Antitrust: Commission sends Statement of Objections to Amazon for the use of non-public independent seller data and opens second investigation into its e-commerce business practices, Press Release, 10 November. Available at: https://perma.cc/63LX-V5XV.

32

AI Act, Article 16.

33

AI Act, Article 23.

34

AI Act, Article 24.

35

AI Act, Article 25.

36

European Commission (2023) Guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union to horizontal co-operation agreements, C 259/1, para. 384.

37

Ibid., para 386.

38

AI Act, Article 66(h).

39

Page 93 of the European Commission’s proposal.

40

Impact Assessment Accompanying The Proposal For A Regulation of The European Parliament And of The Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts {COM(2021) 206 Final}; European Commission’s proposal, recital 1.

41

Page 12 of the European Commission’s proposal.

42

Page 8 of the European Commission’s proposal.

43

The same input will always produce the same output using deterministic AI systems.

44

Non-deterministic AI systems may produce different outputs for the same input.

45

Chen, C., et al. (2022) Privacy regulation and firm performance: Estimating the GDPR effect globally, The Oxford Martin Working Paper Series on Technological and Economic Change, No. 2022–1; Janssen, R., et al. (2022) GDPR and the Lost Generation of Innovative Apps, National Bureau of Economic Research, No. w30028; Johnson, G. A., Shriver, S. K., and Goldberg, S. G. (2023) Privacy and Market Concentration: Intended and Unintended Consequences of the GDPR, Management Science.

46

Demirer, M., Jiménez Hernández, D. J., Li, D., and Peng, S. (2024) Data, Privacy Laws and Firm Production: Evidence from the GDPR, National Bureau of Economic Research, Working Paper No. w32146.

47

European Commission’s proposal, p. 9.

48

AI Act, Article 2(3).

49

AI Act, Article 11(1).

50

AI Act, Article 11(1).

51

AI Act, Annex IV.

52

AI Act, Article 11(1); Annex IV(2f); Annex IV(2g).

53

Himes, J. L., et al. (2021) Antitrust Enforcement and Big Tech: After the Remedy Is Ordered, Stanford Computational Antitrust, 1, pp. 64, 71.

54

See the earlier draft of the present article at https://perma.cc/NPB7-MBVA.

55

AI Act, Article 11(1).

56

AI Act, Article 58(1d).

57

AI Act, Article 58(1g).

58

AI Act, Article 60.

59

AI Act, Article 60(4a) and Article 60(4b).

60

AI Act, Article 61.

61

AI Act, Article 5(1a).

62

AI Act, Article 5(1h).

63

AI Act Recital 17 mentions identification that “occur all instantaneously, near-instantaneously or in any event without a significant delay.”

64

Article 3 of the AI Act defining “real-time” as “not only instant identification, but also limited short delays in order to avoid circumvention” is vague.

65

AI Act, Article 10(3).

66

Article 10 of the European Commission’s proposal. We welcome the final version of Article 10 as, in a previous draft, we suggested “ratifying the European Parliament’s version of Article 10 as far as the training of datasets is concerned.”

67

Paullada, A., et al. (2021) Data and its (dis)contents: A survey of dataset development and use in machine learning research, Patterns, 2; Boyd, D. and Crawford, K. (2012) Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon, Information, Communication & Society, 15, p. 662; Northcutt, C. G., et al. (2021) Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. Available at: https://perma.cc/G4QL-2GNT.

68

Schrepel, T. and Pentland, A. (2024) Competition between AI Foundation Models: Dynamics and Policy Recommendations, Industrial and Corporate Change, dtae042; Paullada, A., et al. (2021) Data and its (dis)contents: A survey of dataset development and use in machine learning research, Patterns, 2.

69

AI Act, Article 14(4).

70

AI Act, Article 99(3).

71

AI Act, Article 99(4).

72

AI Act, Article 100(2) and Article 100(3).

73

AI Act, Recital 73.

74

AI Act, Article 9(1).

75

AI Act, Article 9(2).

76

AI Act, Article 14.

77

AI Act, Article 15(1).

78

AI Act, Article 12(1).

79

AI Act, Article 13(1).

80

European Commission’s proposal, Article 13(1).

81

AI Act, Article 40; Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardization.

82

AI Act, Article 43.

83

AI Act, Article 8.

84

AI Act, Article 43(1).

85

European Commission (2022) Key players in European Standardisation, Internal Market, Industry, Entrepreneurship and SMEs. Available at: https://perma.cc/HAR2-8GTD.

86

Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardization, recital 21.

87

Ibid., Article 17.

88

An earlier draft of the present study also suggested amending Article 40 of the AI Act to make the validity of harmonized standards conditional on the European standardization organizations having at least one third of their seats held by SMEs when adopting standards related to the AI Act, see https://perma.cc/NPB7-MBVA. This was not done.

89

AI Act, Chapter V: General-Purpose AI Models.

90

AI Act, Article 3(63).

91

AI Act, Article 3(63).

92

AI Act, Article 53(1a). Annex XI requires detailed information such as “the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made,” “information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies,” and so forth.

93

AI Act, Article 53(1b). Annex XII also requires detailed information such as “how the model interacts, or can be used to interact, with hardware or software that is not part of the model itself, where applicable,” “the technical means (e.g., instructions for use, infrastructure, tools) required for the general-purpose AI model to be integrated into AI systems;” “information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies,” and so forth.

94

AI Act, Article 53(1c).

95

AI Act, Article 53(1d).

96

AI Act, Article 53(2).

97

AI Act, Recital 102.

98

Schrepel, T. and Potts, J. (2025, forthcoming) Measuring the Openness of AI Foundation Models: Competition and Policy Implications, Information & Communications Technology Law.

99

European Parliament (2024) Legislative resolution of 12 March 2024 on the proposal for a directive of the European Parliament and of the Council on liability for defective products (COM(2022)0495—C9–0322/2022–2022/0302(COD)), Recital 14; European Parliament (2024) Legislative resolution of 12 March 2024 on the proposal for a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020 (COM(2022)0454—C9–0308/2022–2022/0272(COD)), Recital 18.

100

Schrepel, T. and Pentland, A. (2024) Competition between AI Foundation Models: Dynamics and Policy Recommendations, Industrial and Corporate Change, dtae042.

101

AI Act, Article 51(1a).

102

AI Act, Article 51(2).

103

Rahman, R., Owen, D. and You, J. (2024) Tracking Large-Scale AI Models, Epoch AI. Available at: https://perma.cc/TA94-3E4P; White House (2023) Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 30 October. Available at: https://perma.cc/RN2E-WGDU.

104

AI Act, Article 51(1b). Annex XIII specifies that the European Commission shall consider diverse criteria such as the number of parameters of the model; the quality or size of the data set, the amount of computation used for training the model, the number of registered end-users and business users, etc.

105

AI Act, Article 51(3).

106

AI Act, Article 52(6).

107

AI Act, Article 55(1).

108

AI Act, Article 55(2).

109

AI Act, Recitals 138 and 173.

110

European Commission’s proposal, Article 4.

111

AI Act, Article 7(3).

112

AI Act, Article 7(3).

113

AI Act, Article 6(3).

114

AI Act, Article 112(1).

115

AI Act, Article 112(10).

116

AI Act, Article 112(7).

117

Kusche, I. (2024) Possible Harms of Artificial Intelligence and the EU AI Act: Fundamental Rights and Risk, Journal of Risk Research, 1; Almada, M. and Radu, A. (2024) The Brussels Side-Effect: How the AI Act Can Reduce the Global Reach of EU Policy, German Law Journal, 1; Aloisi, A. and De Stefano, V. (2023) Between Risk Mitigation and Labour Rights Enforcement: Assessing the Transatlantic Race to Govern AI-Driven Decision-Making Through a Comparative Lens, European Labour Law Journal, 14, p. 283.

Author notes

Associate Professor of Law at the Vrije Universiteit Amsterdam, Faculty Affiliate at the Stanford University CodeX Center. Email address: [email protected]. No external funding was received or relied upon for this paper. I would like to thank the Vrije Universiteit Amsterdam’s 3rd year Artificial Intelligence classes of 2022, 2023, and 2024, to which I taught a course on the AI Act, for allowing me to delve deeply into the various drafts, forcing me to get up to speed on their technical details, and for giving me perspective on their most controversial proposals. I am indebted to Faith Obafemi for her great editorial assistance. Please note that an initial draft of this article was published in October 2023. That version included proposals for improving the AI Act, which was still under negotiation at the time. It can be accessed here: https://perma.cc/NPB7-MBVA. The final version of this article focuses on the AI Act as it appears in the Official Journal.

This is an Open Access article distributed under the terms of the Creative Commons Attribution NonCommercial-NoDerivs licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact [email protected]