

Ensuring a Coherent European Regulatory Approach to Artificial Intelligence

Christophe Carugati

10 June 2024

Generative AI

The DMA intersects with AI-related legal regimes such as data protection, intellectual property rights, and content moderation. The DMA's High-Level Group should establish a collaborative framework and handle joint cases to ensure consistent enforcement.



Artificial intelligence (AI) is a priority for supervisory authorities worldwide due to its rapid adoption. In Europe, the High-Level Group (HLG) for the Digital Markets Act (DMA) has issued a joint statement on AI, emphasising the importance of a coherent regulatory approach[1]. The DMA, enforced by the Commission, aims to ensure fairer and more competitive digital markets by imposing obligations on large online platforms like Alphabet, Amazon, Apple, Booking, ByteDance, Meta, Microsoft, and potentially X, which is still under investigation[2].


The HLG comprises several European regulatory bodies, including those responsible for data protection, consumer protection, competition, audiovisual media, and electronic communications, alongside the Commission. This is their first joint position aimed at ensuring regulatory consistency across the DMA and other legal frameworks.


However, the HLG did not clarify how the DMA interacts with other AI-related legal frameworks or how it will collaborate with supervisory authorities to implement and enforce rules consistently. This analysis addresses these gaps by outlining significant cross-regulatory issues and proposing measures to ensure a consistent regulatory approach. It recommends that the HLG define a practical collaborative framework and engage in joint cases to consistently implement and enforce the DMA on AI.

Cross-Regulatory Issues Between the DMA and Other Legal Regimes


Large online platforms develop and deploy AI systems within their products and services. These systems, particularly Generative AI (GenAI) capable of producing content, such as text, images, videos, and songs, from extensive input data, are a priority for competition authorities worldwide, including in the United Kingdom, Portugal, Hungary, Europe, the United States, France, Japan, Canada, India, and the G7 competition authorities[3]. These authorities are concerned that a few large online platforms might dominate the AI sector due to their existing infrastructure, services, and strategic partnerships with AI firms. Although market studies are currently ongoing to understand how GenAI affects competition, authorities are determined to act swiftly to avoid repeating historical delays in digital market interventions[4].


In Europe, the DMA covers AI systems embedded within designated core platform services (CPSs), such as search engines, operating systems, and social networking services. It limits how large online platforms can use data to develop their AI models and how they can deploy models into their services. The DMA also intersects with several cross-regulatory issues relevant to AI development and deployment, including data protection, intellectual property rights (IPRs), and content moderation.


DMA and Data Protection


AI models often use personal data collected from publicly available sources and first-party and third-party products and services. In this context, model developers must comply with the General Data Protection Regulation (GDPR), the DMA, and consumer protection rules. These regulations ensure that users provide informed consent through clear information and user interfaces regarding the use of their data for model development. Additionally, users should receive accurate information about their data when interacting with AI models.


Competition and data protection interact in digital markets[5]. Data protection is a competitive parameter that firms employ to attract users to their platforms, thereby enhancing competition. For example, the European Commission recognised data protection as a significant quality factor in the Microsoft/LinkedIn merger[6]. However, large online platforms can also misuse data protection to stifle competition by imposing unnecessary restrictions on third-party businesses. In the UK, the Competition and Markets Authority (CMA), in collaboration with the Information Commissioner’s Office (ICO), oversees the implementation of Google's Privacy Sandbox, which will phase out third-party cookies and impact firms providing advertising services on Google Chrome[7]. This collaboration aims to ensure that the implementation protects privacy without undermining competition.


The CMA and ICO have issued joint statements on data protection and competition[8], and the harmful design of user interfaces[9], within the framework of the UK Digital Regulation Cooperation Forum (DRCF), which aims to ensure a coherent regulatory approach in the UK. Currently, the CMA and ICO are drafting a joint statement on how data protection applies to the development and use of GenAI[10].


Data protection interventions regarding GenAI have already been initiated in Europe[11]. However, measures to safeguard privacy can impose higher regulatory costs and burdens on newcomers and small and medium-sized enterprises (SMEs), making it more difficult for them to compete with larger firms that benefit from economies of scale in regulatory compliance[12].


DMA and IPRs


AI models may also use copyrighted data from public sources scraped from the web and proprietary datasets from third parties, such as publishers. In this context, model developers must comply with IPRs, particularly copyright laws. However, they face global legal challenges regarding whether the use of copyrighted data for training AI models is subject to copyright laws, which would require obtaining the consent of rightsholders[13]. While this issue remains unsettled, some media publishers have formed partnerships with model developers, allowing the use of their content in exchange for a fee[14].


In Europe, interventions have already started. Some French policymakers have called for a reform of the European Copyright Directive[15]. Additionally, the AI Act, which governs the use of AI applications, mandates that providers of general-purpose AI models implement a policy to comply with copyright laws and provide a detailed summary of the content used for training AI models. Furthermore, the French competition authority fined Google for not informing publishers about the use of their content in its chatbot Google Bard, violating previous commitments on related rights[16].

IPR interventions to safeguard copyright may make it harder for newcomers and SMEs to enter and compete with larger firms that have the financial resources to compensate rightsholders for using their copyrighted data. Moreover, dominant firms might refuse to provide newcomers access to their copyrighted data. If the conditions for a refusal to deal are met under competition laws, competition interventions can make the market more competitive by allowing newcomers to enter and compete[17]. Additionally, the DMA mandates providers of online search engines to share search data with competing providers.


DMA and Content Moderation


AI models might generate synthetic media, such as deep fakes, subject to mitigating measures against harmful or illegal content under the Digital Services Act (DSA), which addresses content moderation online, and transparency requirements under the AI Act. Additionally, these models might produce false and misleading information due to hallucinations, which might infringe on the GDPR, consumer protection rules, and the DSA. In this context, model and application developers impose rules to moderate content on their platforms in line with content moderation regulations. Furthermore, while the DMA mandates providers of application stores, online search engines, and online social networking services to apply fair, reasonable, and non-discriminatory general access conditions for business users, they can also impose content moderation requirements on businesses to comply with the DSA.


Content moderation plays a significant role in AI markets by fostering consumer trust. Interventions are already underway in Europe. The European Commission has requested information on how large online platforms mitigate GenAI risks, such as deep fakes, hallucinations, and the manipulation of electoral processes[18].


Clarifying content moderation requirements can provide firms with greater legal certainty and enhance consumer trust. This may lead to increased competition as newcomers enter the market and compete with larger firms, and consumers switch between providers. However, these interventions might also impose disproportionate regulatory burdens and costs on newcomers and SMEs, making it harder for them to compete with established firms. Moreover, content moderation requirements by large online platforms might restrict competition. In the UK, the CMA and the Office of Communications (Ofcom) have released a joint statement on competition and online safety to address this cross-regulatory issue[19].


Measures to Ensure a Consistent Regulatory Approach


The HLG is essential for fostering a collaborative approach to ensure consistency in implementing and enforcing the DMA alongside other AI-related legal regimes. To achieve this, the HLG should implement two key measures.


First, the HLG should establish a practical collaborative framework. Supervisory authorities should release Memoranda of Understanding (MoUs) detailing their collaboration within the DMA framework. These MoUs should cover case allocation, information sharing (including confidential information), and expertise sharing (including staff secondment). Additionally, they should establish an informal Hub modelled on the UK DRCF AI and Digital Hub, enabling digital firms to seek support on complex cross-regulatory issues[20].


Second, the HLG should engage in joint cases. Close collaboration on individual cases will allow supervisory authorities to efficiently and consistently address cross-regulatory issues. For instance, Meta's “pay-or-consent” model is under investigation by three supervisory authorities based on three legal frameworks: data protection[21], consumer protection[22], and the DMA[23]. The Commission and other supervisory authorities should collaborate on this case to protect privacy and consumers without undermining competition.


By implementing these measures, the HLG can ensure a coherent and efficient regulatory environment for AI, balancing the protection of privacy and consumer rights with the promotion of fair competition.


[1] European Commission, High-Level Group for the Digital Markets Act Public Statement on Artificial Intelligence, 22 May 2024 (accessed 4 June 2024). Available at:

[2] See, Christophe Carugati, DMA Tracker, Digital Competition (accessed 5 June 2024). Available at:

[3] Christophe Carugati, GenAI and Competition Hub, Digital Competition (accessed 5 June 2024). Available at:

[4] Sarah Cardell, Opening Remarks at the American Bar Association (ABA) Chair’s Showcase on AI Foundation Models, 11 April 2024 (accessed 5 June 2024). Available at:

[5] Christophe Carugati, The Antitrust Privacy Dilemma, European Competition Journal, 11 February 2023.

[6] M.8124 Microsoft / LinkedIn, 6 December 2016.

[7] CMA, Investigation into Google’s ‘Privacy Sandbox’ Browser Changes (accessed 5 June 2024). Available at:

[8] CMA and ICO, Competition and Data Protection in Digital Markets: A Joint Statement Between the CMA and the ICO, 19 May 2021.

[9] CMA and ICO, Harmful Design in Digital Markets: How Online Choice Architecture Practices Can Undermine Consumer Choice and Control Over Personal Information, 9 August 2023.

[10] DRCF, DRCF Generative AI Adopters Roundtable: High-level Findings, 7 May 2024 (accessed 4 June 2024). Available at:

[11] Noyb, ChatGPT Provides False Information About People, and OpenAI Can't Correct It, 29 April 2024 (accessed 4 June 2024). Available at:

See also, Garante per la protezione dei dati personali, Artificial Intelligence: Stop to ChatGPT by the Italian SA Personal Data is Collected Unlawfully, No Age Verification System is in Place for Children, 31 March 2023 (accessed 4 June 2024). Available at:

See also, European Data Protection Board, Report of the Work Undertaken by the ChatGPT Taskforce, 23 May 2024 (accessed 4 June 2024). Available at:

[12] OECD, Consumer Data Rights and Competition Background note by the Secretariat, 29 April 2020.

[13] Michael M. Grynbaum and Ryan Mac, The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work, The New York Times, 27 December 2023 (accessed 5 June 2024). Available at:

[14] OpenAI, Global News Partnerships: Le Monde and Prisa Media, 13 March 2024 (accessed 4 June 2024). Available at:

See also, Platforms and Publishers: AI Partnership Tracker (accessed 4 June 2024). Available at:

[15] Théophane Hartmann, French MPs Want to Amend EU’s Copyright Rules to Cover Generative AI, Euractiv, 20 January 2024 (accessed 4 June 2024). Available at:

[16] Autorité de la concurrence, Related Rights: The Autorité Fines Google €250 Million for Non-Compliance with Some of its Commitments Made in June 2022, 20 March 2024 (accessed 4 June 2024). Available at:

[17] Inge Graef, Thomas Tombal, and Alexandre de Streel, Limits and Enablers of Data Sharing: An Analytical Framework for EU Competition, Data Protection and Consumer Law, Background Note for the Meeting of the Digital Clearinghouse of 19 November 2019, 2019.

[18] European Commission, Commission Sends Requests for Information on Generative AI Risks to 6 Very Large Online Platforms and 2 Very Large Online Search Engines Under the Digital Services Act, 14 March 2024 (accessed 5 June 2024). Available at:

[19] CMA and Ofcom, Online Safety and Competition In Digital Markets: A Joint Statement Between the CMA and Ofcom, 14 July 2022.

[20] DRCF, Welcome to the DRCF AI and Digital Hub (accessed 5 June 2024). Available at:

[21] Noyb, Noyb Files GDPR Complaint Against Meta Over “Pay or Okay”, 28 November 2023 (accessed 5 June 2024). Available at:

See also, EDPB, Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms, 17 April 2024 (accessed 5 June 2024). Available at: 

See also, BEUC, Consumer Groups Launch Complaints Against Meta’s Massive, Illegal Data Processing Behind its Pay-Or-Consent Smokescreen, 29 February 2024 (accessed 5 June 2024). Available at:

[22] BEUC, Consumer Groups File Complaint Against Meta’s Unfair Pay-Or-Consent Model, 30 November 2023 (accessed 5 June 2024). Available at:

[23] DMA.100055 Meta Article 5(2), 25 March 2024.


About the paper

This paper is part of our GenAI and Competition Hub, which strives for responsible generative AI (GenAI) development, ensuring favourable market conditions that benefit all. We help solve your challenges through impartial, high-quality research; strategic solutions in data, AI, and competition; stakeholder engagement; and insights and policy recommendations on complex policy developments. Contact us to join the Hub or for consultation/press inquiries.

About the author

Christophe Carugati

Dr. Christophe Carugati is the founder of Digital Competition. He is a renowned and passionate expert on digital and competition issues with a strong reputation for impartial, high-quality research. After completing his PhD in law and economics on big data and competition law, he was an affiliate fellow at the economic think-tank Bruegel and a lecturer in competition law and economics at Lille University.
