Is the DMA ready for agentic AI?

Research output: Book/Report › Report

Abstract

The future of AI is agentic. We can discern the first signs of agentic AI in the widely used assistants (e.g., ChatGPT) and, more clearly, in the prototypes of AI agents (e.g., Operator). AI agents do not simply respond to user inputs but exhibit proactive, autonomous behaviour based on a user-set prompt or goal – or even based on a user need inferred by the AI agent. In fulfilling tasks, AI agents adapt and learn from their experiences, interact with the wider digital and physical environment, rely on persistent memory, and integrate with external tools and applications (including other AI agents).
Agentic AI has the potential to transform consumer behaviour and, accordingly, to disrupt existing gatekeepers in platform markets from operating systems (OSs) to search and e-commerce. There are, however, two competition risks: one more immediate, the other more distant. The more immediate risk is that incumbents in markets that AI agents rely on limit the growth of third-party AI agents in favour of their own agents and/or to safeguard their position in markets threatened by agentic AI (“foreclosure of AI agents”). The more distant risk is that AI agents become gatekeepers in their own right and use that position to anticompetitively steer demand (“foreclosure by AI agents”).
This report systematically discusses the appropriate regulatory response to these two risks. While reference is made to competition law, our focus is on the Digital Markets Act (DMA), which is likely the best contender for addressing contestability issues related to consumer-facing digital services in the context of AI agents. However, the DMA was not adopted with AI in mind, and while competition authorities have published reports on generative AI/foundation models, they have not studied AI agents. In other words, despite the (expected) rise of AI agents, there is currently little regulatory guidance, which is a gap that this report fills.
The questions tackled in the report are the following: (i) what are the potential competition concerns (foreclosure of vs by AI agents); (ii) do the firms at the source of these concerns fall under the scope of the DMA; (iii) are the DMA’s obligations meaningful in this context? Based on the answers to these questions, we make recommendations to ensure that the DMA remains an adequate regulatory response in case the potential competition risks manifest themselves. In answering the above questions, we take into consideration several policy interests, including regulatory effectiveness, legal (and investment) certainty, and political legitimacy – and do so amidst a transatlantic debate on the desirability of intervention in (nascent) digital markets.
The foreclosure of AI agents could have several sources. First, AI agents rely on foundation models as an input. Agent providers can currently rely on third-party models that are made available in open-source or via API calls. Should such access be degraded in the future, however, building one’s own state-of-the-art foundation model is – especially given data requirements – a difficult proposition. Second, AI agents need to “live” within a (mobile) device, and need access to its hardware and software features to function effectively. Device makers with their own agent may want to advantage their own agent (e.g., via pre-installation and default status) and/or disadvantage those of competitors (e.g., via decreased interoperability).
The scope of the DMA presents no significant gaps when it comes to the foreclosure of AI agents. The most likely chokepoint is the (mobile) OS. OSs qualify as a core platform service (CPS), and the most relevant ones (iOS, Android, Windows) have been designated with gatekeeper status. AI foundation models do not qualify as a CPS, while one of their key inputs, cloud services (for compute), does. Given the current state of the market, this coverage appears sufficient.
The DMA imposes a number of obligations on gatekeepers that can help prevent the foreclosure of AI agents by keeping their OSs open. First, OS gatekeepers must allow users to uninstall a pre-installed agent as long as it is not essential (Article 6(3), para 1). Second, if the OS gatekeeper’s own AI agent is designated as virtual assistant, web browser or search engine, it must go a step further and show an AI agent choice screen (Article 6(3), para 2). Third, the OS gatekeeper must provide third-party AI agents the same interoperability with hardware and software features as is available to its own agent (Article 6(7)). Fourth, if the agent itself is designated as CPS with gatekeeper status (an issue to which we return below), data portability must be guaranteed (Article 6(9)–(10)).
To different degrees, these obligations – along with some others – can help prevent input foreclosure (through denial of equal access to data and on-device hardware and software features) and distribution foreclosure (through pre-installation, self-preferencing and tying). A major challenge will be technical: to function effectively, AI agents require deep integration with the OS, which makes interoperability and the replacement of first-party AI agents complex.
To ensure the DMA adequately covers foreclosure of AI agents, we make two recommendations. The first is to extend the click and query data-sharing obligations of Article 6(11) to include virtual assistants. Given the critical role of such data for AI agent contestability, this measure would enhance market openness. The second recommendation is to add virtual assistants alongside browser engines and other services in Article 5(7), which prohibits gatekeepers from forcing business users to integrate with certain ancillary services. This would help ensure that users retain freedom in their choice of AI agents and, as a result, foster competitive dynamics between such agents. A possible side-recommendation is to extend the CPSs covered by the FRAND access obligation of Article 6(12) to include cloud computing services, which could help ensure non-discriminatory access as the market sees increasing vertical integration.
In the future, foreclosure by an AI agent could arise. This depends on the development trajectory of AI agents. It may tend towards concentration due to market features (e.g., the importance of learning from users) or the conduct of players in control of key inputs and distribution channels (e.g., those in control of device hardware and software). Should only a few agents remain, or more remain but users nevertheless be “locked in” to their AI agent, then agent providers could use their demand-steering power to foreclose. Such foreclosure generally takes the form of leveraging, in which the AI agent provider props up its other services (or affiliated services) via tying or self-preferencing.
The scope of the DMA becomes important when it comes to foreclosure by an AI agent. In this scenario, the agent itself is the chokepoint, so the question is whether it is subject to the DMA. Given the DMA’s wide CPS definitions, the first generations of AI agents may qualify as search engines or web browsers; in the future, a qualification as OS may even become appropriate. Designation in the virtual assistant CPS category is, however, most suitable. The definition (the processing of “demands, tasks or questions”) can accommodate AI agents, even if it was adopted with voice assistants in mind.
Nevertheless, we recommend mitigating potential ambiguities in (and around) the definition of “virtual assistant”. First, the DMA’s Annex, which identifies business users of virtual assistants as developers that make their app accessible via the assistant, is restrictive: some developers may actively make their apps accessible via AI agents, but others are passively “called upon” by the agent. Hence, an amendment of the Annex via delegated act is recommended. Second, as the distance between AI agents and the virtual assistant definition grows, e.g., due to their autonomy (reacting to user “needs” rather than input), legal certainty would benefit from legislatively adapting/replacing the definition. This has the added benefit of giving the inclusion of AI in the DMA political legitimacy.
A number of DMA obligations can help prevent foreclosure by AI agents, though their applicability depends on the CPS category in which the agent is designated. If the agent is designated in the OS, virtual assistant or web browser CPS category, and it defaults to another service, the gatekeeper must make that default setting changeable; in addition, it must show a search engine choice screen (Article 6(3), para 2). If the agent is designated as a search engine or virtual assistant, the gatekeeper cannot self-preference in ranking (Article 6(5)). As an OS or virtual assistant, the AI agent must grant equal interoperability, e.g., with connected devices (Article 6(7)). A number of other obligations related to data and contractual restrictions apply independent of CPS category.
The qualification as virtual assistant, in our view the most fitting, leads to correct coverage in terms of obligations to prevent foreclosure by an AI agent. Hence, assuming AI agents are designated within the virtual assistant category (or in a future AI agent category), concerns about foreclosure by AI agents do not currently warrant amendments of the DMA.
In conclusion, the DMA is surprisingly ready for agentic AI. The DMA’s unexpected future-proofness in this context stems from two facts: (i) the source of foreclosure of AI agents is likely to be found in CPSs that are already designated, particularly (mobile) OSs; and (ii) legislators added a “virtual assistant” category to the DMA’s list of CPSs, with a wide definition that can plausibly accommodate AI agents. Some amendments, in particular to the definitions of business users and – over time – virtual assistants, as well as the obligations in Articles 5(7), 6(11) and 6(12), could help safeguard the DMA’s future-proofness.
Original language: English
Place of publication: Brussels
Publisher: Centre on Regulation in Europe (CERRE)
Number of pages: 57
Publication status: Published - 3 Jul 2025
Event: CERRE public webinar “AI and the Future of Competition”
Duration: 3 Jul 2025 → …
https://youtube.com/live/Ak5YXX6T6zo?feature=share
