
Opinion

Answer Engines Reshape the Information Economy

Christophe Carugati

18 November 2024

Generative AI

As answer engines drive a new information economy, they should freely use content for AI training but compensate content creators when their work is used to generate answers.

The information economy hinges on a powerful gateway: search engines. These platforms connect users to content creators by displaying search results in response to user queries, enabling both search engines and creators to generate revenue—primarily through advertising. But the rise of “answer engines” is disrupting this model. By delivering direct answers instead of links to content, answer engines risk reducing both the visibility and revenue of original creators. This shift has prompted calls from content creators, regulators, and policymakers to reconsider the foundations of the information economy.

 

Answer engines, in essence, have become content creators themselves. Even when they credit sources, answer engines directly compete with original creators, potentially undercutting the model that sustains them. This competition could cut content creators' revenue by reducing traffic to their sites by as much as 25 per cent, prompting some publishers to pursue legal action seeking compensation and consent over the use of their content.

 

A recent lawsuit in the United States, where News Corp accuses Perplexity.AI of using its content to generate AI summaries without permission, exemplifies this growing conflict. The outcome could require Perplexity.AI to either compensate News Corp or cease operations, setting a precedent with potentially significant implications for answer engines.

 

Some answer engines are taking steps to address these copyright issues through partnerships with content creators. For example, OpenAI has partnered with Le Monde to use the publication’s content to train its AI models and generate answers in exchange for compensation and acknowledgement of the source. However, these partnerships only partially address copyright concerns, as they typically involve select creators under exclusive or non-exclusive terms that remain undisclosed.

 

Such arrangements also raise competitive concerns. Exclusive partnerships, for instance, could restrict rival answer engines’ access to the same content, potentially squeezing them out of the market. Even non-exclusive agreements could create barriers if licensing costs are prohibitive, resulting in a quasi-exclusive scenario that disadvantages smaller players.

 

In response, competition authorities of the G7 countries are advocating for a new framework that would mandate consent and compensation for content creators, thus fostering sustainable content creation for AI training. This renewed “pay for news” model has gained political backing in the United Kingdom, building on long-standing demands from publishers for compensation when platforms like Google display their content. However, these laws remain contentious: while platforms undeniably drive traffic to publishers by displaying links, the degree to which platforms benefit from publishers’ content is less clear. To address this, Google is currently running a limited experiment in Europe, temporarily withholding some European publishers’ content from a small share of European users to assess the impact on traffic. Critics contend that these laws allow publishers to “have their cake and eat it,” encouraging rent-seeking behaviour as publishers seek both the traffic platforms provide and payment for the content that generates it.

 

Nonetheless, the shift from traditional search engines to answer engines reinforces the case for a reimagined “pay for news” approach. Unlike search engines, answer engines repurpose original content directly and may diminish traffic to sourced content. In this new landscape, content creators could lose the incentive to produce high-quality work, ultimately depriving users of reliable information. Yet a compensation model is not without challenges. Many creators—journalists, academics, and others—rely on free access to existing information, whether they read it, see it, or listen to it, to produce original new work. If they, too, must pay to reuse information, the incentive to create new content could diminish, a concern that answer engines frequently raise in their defence. In the U.S., answer engines argue that their output constitutes “fair use,” as the generated answers often differ substantially from the source material and should therefore not require payment for reuse.

 

Whether or not courts ultimately define this as “fair use,” one thing is clear: a new information economy is taking shape. To navigate this, policymakers must distinguish between two data usage stages—model training and grounding content generation. At the training stage, answer engines use vast and diverse datasets, with each source contributing only tiny fragments. Once the model is trained, in most cases, the output generated does not reproduce content verbatim from any single source, nor can it always identify specific sources used in the answers. At this stage, it is reasonable for model developers to freely use content, as this process neither replicates content nor diverts traffic from content creators and thus does not affect their financial interests. This approach has already been implemented in Japan.


In contrast, at the grounding phase, answer engines directly depend on specific content to generate answers and can attribute the source material. At this stage, answer engines should compensate creators based on a fair model. One approach could involve sharing ad revenue with content creators featured in their responses. Such a system could incentivise high-quality content production by rewarding creators based on the value and quality of their contributions.
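One way to picture this revenue-sharing approach is a simple pro-rata split of an answer's ad revenue across the sources it cites. The sketch below is purely illustrative: the function, the contribution weights, and the publisher names are assumptions for the sake of the example, not a description of any existing system.

```python
# Illustrative sketch of the ad-revenue-sharing idea: split one answer's
# ad revenue among its cited sources in proportion to each source's
# contribution. Weights and names here are hypothetical.

def share_ad_revenue(ad_revenue, citations):
    """Split an answer's ad revenue among cited sources.

    `citations` maps each source to an assumed contribution weight,
    e.g. how much of the answer was grounded in that source.
    """
    total_weight = sum(citations.values())
    if total_weight == 0:
        return {source: 0.0 for source in citations}
    return {
        source: ad_revenue * weight / total_weight
        for source, weight in citations.items()
    }

# Example: one answer earns 1.00 in ad revenue, grounded mostly in one source.
payouts = share_ad_revenue(1.00, {"publisher_a": 3, "publisher_b": 1})
# → {"publisher_a": 0.75, "publisher_b": 0.25}
```

In practice the hard part is choosing the weights: measuring how much each source contributed to an answer, and how to reward quality rather than volume, is exactly the design question such a compensation model would need to settle.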

 

In this way, the rise of answer engines presents an opportunity to drive a new wave of innovation in how users engage with information, while also encouraging investment in high-quality content creation.

Keywords

Generative AI

Competition Policy

About the paper

This paper is part of our GenAI and Competition Hub, which strives for responsible generative AI (GenAI) development, ensuring favourable market conditions that benefit all. We address your challenges through tailored research projects, consultations, training sessions, and conferences. Reach out to join our Hub or for inquiries about research, consultations, training, conferences, or press matters.

About the author

Christophe Carugati

Dr. Christophe Carugati is the founder of Digital Competition. He is a renowned and passionate expert on digital and competition issues with a strong reputation for impartial, high-quality research. He holds a PhD in law and economics on big data and competition law, and is a former affiliate fellow at the economic think tank Bruegel and a former lecturer in competition law and economics at Lille University.

Related content
