Agentic AI Raises Alarm Bells for Antitrust Regulators

    As agentic artificial intelligence gains traction in enterprise applications, a paper in the June CPI TechREG Chronicle warns that these systems could draw antitrust scrutiny from regulators.

    The paper, “Agentic AI: Future Issues at the Intersection of Technology, Innovation and Competition Policy,” cited antitrust and other risks arising from the use of AI systems that autonomously complete tasks with minimal or no human supervision.

    “You’re entrusting this AI to do a lot of tasks for you,” co-author Christopher Suarez told Competition Policy International (CPI), a PYMNTS company, in an interview. Those tasks, he said, could draw on a dataset containing private information from rival organizations, a setup that can lead to collusion.

    Suarez pointed to the example of RealPage, a company that provides software and data analytics to the rental housing industry. Last year, the U.S. Department of Justice sued RealPage, alleging that it effectively enabled landlords to collude on rental rates by aggregating their non-public rental information into its pricing software.

    “It was as if the landlords or the different companies were colluding, which raised antitrust concerns,” Suarez said. “You could see a similar scenario playing out in agentic AI, where this agentic AI is gathering a bunch of pricing information going around the internet, trying to be the agent for this person but doing so in a way that perhaps leads to problems in terms of price-setting or things of that nature.”

    In the paper, Suarez and his co-authors warned that agentic AI systems could give rise to market monopolies or become pathways for anticompetitive agreements.

    Moreover, “the notion that one agent can talk to another agent and enter into some sort of agreement, that could be a scenario,” Suarez said, citing the example of an AI agent calling another AI agent to book a hotel room. “We need to be aware of the fact that actual anti-competitive agreements could be reached through agents at some point in the future.”

    However, Suarez said most agentic AI systems that he has seen thus far allow for some type of human intervention. For example, a diner using an AI agent to order food at a pizzeria would still need to approve the transaction.

    “The human-in-the-loop principle is going to mitigate” some of the risks, he said.
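
    As a rough sketch of how such an approval gate might work, the hypothetical Python example below has the agent propose a transaction that executes only after the person explicitly approves it; the agent behavior, order details and function names are illustrative, not drawn from any product Suarez discussed:

        # Hypothetical human-in-the-loop gate: the agent proposes an action,
        # but nothing executes until a person explicitly approves it.
        from dataclasses import dataclass

        @dataclass
        class ProposedAction:
            description: str
            amount_usd: float

        def agent_propose_order() -> ProposedAction:
            # In a real agent, this would come from planning over the user's request.
            return ProposedAction("1 large margherita pizza from a local pizzeria", 18.50)

        def human_approves(action: ProposedAction) -> bool:
            # The human sees the proposed transaction and must opt in.
            answer = input(f"Approve: {action.description} (${action.amount_usd:.2f})? [y/N] ")
            return answer.strip().lower() == "y"

        def execute(action: ProposedAction) -> None:
            print(f"Placing order: {action.description}")

        if __name__ == "__main__":
            proposed = agent_propose_order()
            if human_approves(proposed):
                execute(proposed)
            else:
                print("Order cancelled; no transaction was made.")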

    Hub-and-Spoke Conspiracies

    Suarez said the RealPage case resembles “hub-and-spoke” conspiracies in antitrust law, where a central platform, or hub, facilitates data exchanges among independent players — the spokes — resulting in similar pricing behavior without direct communication between rivals.

    Even without explicit collusion, this setup could give rise to what’s called “conscious parallelism,” a potentially illegal pattern of coordinated conduct, he said.

    “If that data is shared, if that data is overlapping, that could raise some of these anticompetitive concerns,” Suarez said.

    As agents are increasingly adopted, “we need a liability regime for it,” Suarez said, adding that the jury is still out on who would be liable for risks introduced by generative AI since it’s still early days for the tech.

    Market Concentration

    Another area of concern is market concentration. Drawing parallels with the DOJ’s search case against Google, Suarez warned that a developer of agentic AI could use network effects to dominate the field.

    Generative and agentic AI “are going to be the search of the future,” Suarez said. “They’re going to supplant traditional web searching. Someone tries to be the dominant AI agent provider … and create certain tying arrangements or create certain contractual obligations that allow them to gain market concentration.”

    In their paper, Suarez and his co-authors cautioned that if one or a few AI providers become the default across platforms, “one particular AI agent could become dominant, reducing competition that can spur innovation, control pricing or create optionality.”

    Intellectual Property

    Suarez also highlighted intellectual property challenges tied to interoperability, a key feature for agentic AI systems to communicate with one another. While emerging standards such as the Model Context Protocol and Agent2Agent promote interconnectivity, they may also entrench proprietary control if not properly regulated.
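
    As a rough illustration of that kind of agent-to-agent interoperability, the hypothetical Python sketch below shows one agent serializing a task request and a second agent replying; it is a simplified example and does not reproduce the actual Model Context Protocol or Agent2Agent message formats:

        # Hypothetical, simplified agent-to-agent exchange (not the MCP or A2A wire format).
        import json

        def build_task_request(sender: str, task: str, params: dict) -> str:
            # One agent serializes a request for another agent to act on.
            return json.dumps({"from": sender, "type": "task.request", "task": task, "params": params})

        def handle_task_request(raw: str) -> str:
            # The receiving agent parses the request; a real agent would call a tool or model here.
            msg = json.loads(raw)
            result = {"status": "accepted", "task": msg["task"]}
            return json.dumps({"from": "hotel-agent", "type": "task.response", "result": result})

        if __name__ == "__main__":
            request = build_task_request("travel-agent", "book_hotel", {"city": "Boston", "nights": 2})
            print(handle_task_request(request))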

    Open source has been touted as one solution, but it comes with certain restrictions.

    “Open source is not free” in the sense of being completely unencumbered, Suarez said. “Open source has conditions.”

    While users generally don’t pay to use open-source models, license obligations range from attribution (the MIT License) to making source code available (the GNU AGPLv3).

    Copyright and Fair Use

    The data itself could pose copyright risks to enterprises deploying agentic AI. Suarez said he could count at least 48 legal cases relating to fair use of copyrighted training data.

    “The courts are still very much figuring that out,” he said. However, “you need to be aware of the copyright licensing risks.”

    For now, major AI companies, including OpenAI, Microsoft, Google, AWS and Anthropic, are indemnifying users against copyright lawsuits arising from use of their AI models, a move aimed at accelerating adoption.

    Global Regulatory Regimes

    Because agentic AI systems collect and process data across jurisdictions, companies face exposure to global regulatory regimes.

    Suarez pointed to the EU AI Act and Korea’s AI law, as well as legislation from various U.S. states. According to the National Conference of State Legislatures, 31 states, Puerto Rico and the Virgin Islands have adopted AI resolutions or enacted legislation.

    Aligning internal AI deployments with corporate ethics policies is also important, Suarez said.

    “There needs to be serious conversations around not just what the law says, but what your corporate values are,” he said.

    However, Suarez said he was surprised that in the Trump administration’s 28-page AI Action Plan, released last week, “there was not a single mention of agentic AI in the entire document. I was shocked because it is such a big topic right now.”

    Until clearer regulations emerge, Suarez said companies should “be thoughtful about looking at the corpus of regulations” and take proactive steps to ensure transparency, ethical use and competition compliance when deploying agentic AI systems.
