
After the Google Ruling, Antitrust Became a Blunt Instrument for AI Competition


Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Antitrust breakups miss the real battleground: AI assistants, not blue links
Prioritize interoperability and open defaults to keep markets contestable
Track assistant-led discovery, not just search share, to safeguard users and educators

While the government's major search case crawled from filing to remedy over nearly five years, AI assistants went from curiosity to daily habit. ChatGPT alone now handles about 2.5 billion prompts each day and logs billions of visits each month. This change in user behavior, in which people ask systems for summarized answers rather than lists of links, matters more for the future of competition than whether a judge forces a company to sell a browser or a mobile operating system. The court's decision not to mandate a breakup landed with a thud in policy circles, even though Google retains its dominance. By the time the gavel fell, competition had already moved on: search is becoming a feature of assistants, not the reverse, yet the legal tools remain aimed at past battles. If we keep attacking shrinking lists of links while assistants capture user demand, we will miss the market and the public interest it serves.

The market shifted while the law took its time

The court did not require Google to sell Chrome or Android, a decision that spared the economy the shock of a major split. It also highlighted a deeper issue: traditional antitrust remedies are too slow and too narrow for markets that can change within weeks in response to model updates and shifting user habits. The legal timeline included a 2020 complaint, a 2024 opinion on liability, a spring 2025 trial on remedies, and a September 2025 order that stopped short of a breakup even as it tightened conduct rules. Meanwhile, OpenAI released a web-connected search mode to the public, Microsoft integrated Copilot into Windows and Office, and newcomers like Perplexity turned retrieval-augmented answers into a daily routine. The contest now looks different: it is about who sits between users and the web, who summarizes, cites, and takes action. A remedy designed for default search boxes cannot, on its own, govern this new chokepoint.

The numbers tell two stories. On one hand, Google's global search share remains vast, about 90% across devices, with Bing below 4%, which seems to argue for a structural fix. On the other hand, the slice of the market most relevant to office work and professional research shows Bing creeping toward double digits on desktop worldwide, and AI assistants are absorbing queries that used to be traditional searches. A more credible reading is not that the incumbent is collapsing but that the space for competition has shifted. StatCounter measures shares from page-view data and tends to undercount usage inside assistant interfaces, while Similarweb's "visits" indicate traffic, not unique visitors. In plain terms, if assistants answer before links load, their growth will not show up promptly in search-engine shares even as they capture user intent. Policies that wait for share metrics to move may arrive two product cycles too late.

Figure 1: Bing’s gains are real on desktop, but the contest is migrating to assistants; relying on search-share alone understates where user intent is flowing.

Measuring the real competition: assistants vs. links

A better measure is emerging. Pew reports that the share of U.S. adults using ChatGPT has roughly doubled since 2023, and the Reuters Institute notes that chatbots now register as a source of news discovery for the first time. OpenAI's own usage research and independent traffic data likewise indicate that information-seeking is a core use of these assistants, not a side feature. If we count assistant-led discovery as part of the search market, then Google's field of competitors now includes OpenAI's ChatGPT, Microsoft's Copilot, Perplexity, and specialized assistants, each vying for the first interaction with a question and the last interaction before an action is taken. This changes the discussion around remedies. Default settings on browsers and phones still matter, but control over model inputs, training data, and real-time content licensing determines the speed and quality of responses. Those who secure scalable rights to news, maps, product catalogs, academic abstracts, and local listings, not just access to a search bar, will set the pace of competition.

In this context, the September decision looks less like the end of competition policy and more like a verdict on the kind of competition policy we have tried to apply. The court's refusal to order a breakup, combined with stricter conduct rules, will not halt the shift from queries to prompts. It does little to lower the significant barriers in AI markets, such as computing costs, access to advanced models, and the cost of acquiring data lawfully. Regulators outside the U.S. have begun to change their strategies. The EU's Digital Markets Act has mandated choice screens and some unbundling, with early signs of modest gains for rival browsers, while the U.K.'s new DMCC framework empowers the CMA to set targeted conduct rules for firms with "strategic market status." These tools are proactive and sector-specific; they move faster and can be adapted to assistant-related issues. The U.S. does not need a mirror image of these regimes, but it does need a plan that treats assistants, not just search boxes, as a key part of the competitive landscape.

From punishment to interoperability: a new plan

If "antitrust is dead," it is only because structural remedies, litigated over years, rarely change behavior quickly enough. The alternative is not deregulation; it is making interoperability a policy priority. Begin with data. Courts can compel disclosures in specific cases, but policymakers should establish licensed data-sharing commons for categories essentially treated as public goods: geography, civic information, public safety, and basic product information. Pair mandatory licensing at fair, transparent rates with open technical "ports": stable APIs, standardized formats, and audit logs, so any qualified assistant can connect. This would blunt the advantage of exclusive vertical integrations and shift competition to the interface rather than the inputs. The CMA's work on foundation models is a useful example, citing access to computing and data as major hurdles and proposing principles for fair access. Turning those principles into law and procurement requirements would give challengers a chance that does not depend on rare and lengthy breakups.
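To make the idea of open technical "ports" concrete, here is a minimal Python sketch of what one record from such a data-sharing commons might look like, paired with an append-only audit entry. Every field name, the hashing choice, and the placeholder license URL are illustrative assumptions, not an existing or proposed standard.

```python
# Illustrative sketch only: field names and structure are assumptions,
# not an existing or proposed standard.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LicensedRecord:
    """One record served by a hypothetical data-sharing commons."""
    record_id: str
    category: str          # e.g. "geography", "civic", "product"
    payload: dict          # the licensed content itself
    license_terms: str     # pointer to the fair, transparent rate card
    source: str            # originating provider

@dataclass
class AuditLogEntry:
    """Append-only log entry so regulators can verify who accessed what."""
    accessed_at: str
    assistant_id: str      # the qualified assistant making the call
    record_id: str
    payload_hash: str      # ties the log entry to the exact content served

def serve_record(record: LicensedRecord, assistant_id: str) -> tuple[str, AuditLogEntry]:
    """Return the record as JSON plus an audit entry for the access."""
    body = json.dumps(asdict(record), sort_keys=True)
    entry = AuditLogEntry(
        accessed_at=datetime.now(timezone.utc).isoformat(),
        assistant_id=assistant_id,
        record_id=record.record_id,
        payload_hash=hashlib.sha256(body.encode()).hexdigest(),
    )
    return body, entry

if __name__ == "__main__":
    rec = LicensedRecord(
        record_id="geo-0001",
        category="geography",
        payload={"place": "Lausanne", "lat": 46.52, "lon": 6.63},
        license_terms="https://example.org/rates/geography",  # placeholder URL
        source="national-mapping-agency",
    )
    body, log = serve_record(rec, assistant_id="challenger-assistant-42")
    print(body)
    print(asdict(log))
```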

Next, address defaults where they still shape user behavior, but measure success by switching costs, not just by the presence of "choice screens." The EU has shown that choice screens can help, yet design and implementation are crucial; earlier versions were awkward and had inconsistent effects. Make defaults portable: a user's chosen assistant should follow them across devices and applications unless they opt out. Require one-tap rerouting of any query field to the user's current default assistant, and prohibit contract terms that penalize manufacturers for offering rival options upfront. Importantly, audit the interaction when AI summaries appear above links. If Google's summaries lead to fewer downstream clicks, require disclosure of those metrics, parity placement for rival answer modules, and clear source labeling. The goal is not to hinder Google's progress but to prevent a single gatekeeper from locking up the interface just as assistants are becoming interchangeable.
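As a concrete illustration of the disclosure metric described above, the short sketch below compares how often results pages with and without an AI summary lead to at least one click on an external site. The log fields ("ai_summary_shown", "external_clicks") are hypothetical; no platform's actual telemetry schema is implied.

```python
# Illustrative sketch, assuming a simple per-query log; the field names
# are hypothetical, not any platform's real telemetry schema.
def downstream_click_rate(query_log: list[dict]) -> dict[str, float]:
    """Compare external click rates for pages with and without an AI summary."""
    buckets = {True: [0, 0], False: [0, 0]}  # shown -> [queries, queries with >=1 external click]
    for q in query_log:
        shown = bool(q["ai_summary_shown"])
        buckets[shown][0] += 1
        if q["external_clicks"] > 0:
            buckets[shown][1] += 1
    return {
        "with_summary": buckets[True][1] / max(buckets[True][0], 1),
        "without_summary": buckets[False][1] / max(buckets[False][0], 1),
    }

# Example: a toy log of four queries.
log = [
    {"ai_summary_shown": True, "external_clicks": 0},
    {"ai_summary_shown": True, "external_clicks": 1},
    {"ai_summary_shown": False, "external_clicks": 2},
    {"ai_summary_shown": False, "external_clicks": 1},
]
print(downstream_click_rate(log))  # {'with_summary': 0.5, 'without_summary': 1.0}
```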

At the same time, we need to stop treating ad tech and discovery as separate arenas. The ongoing phase of ad-tech remedies will shape who can finance free assistants at scale. A transparent, auditable auction with open interfaces lets competitors buy distribution without relying on an incumbent's opaque systems. If, as the DOJ contends, parts of Google's ad-tech stack have been unlawfully monopolized, the remedy must include not only structural options but also open auction rules and fair-access commitments so third-party assistants can route commercial traffic. Attention markets fund the engines; opening the toll booth reduces the incumbent's ability to subsidize exclusive defaults. This approach is more likely to promote competition than splitting off Chrome and Android in 2025, when the bottleneck has already moved to the ad-auction layer and the answer layer.
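For readers who want the mechanics, here is a toy second-price auction with a published audit log, the general shape of a "transparent, auditable auction." It is a sketch of the textbook mechanism, not a description of Google's or anyone else's actual ad-auction rules.

```python
# A minimal sketch of a transparent second-price auction with an auditable log.
# Illustration of the general mechanism only, not any real platform's rules.
def run_auction(bids: dict[str, float]) -> dict:
    """Winner pays the second-highest bid; the full ranking is logged for audit."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else top_bid
    return {
        "winner": winner,
        "clearing_price": clearing_price,
        "audit_log": ranked,  # published so rivals can verify they were treated fairly
    }

print(run_auction({"incumbent": 2.40, "challenger_a": 2.10, "challenger_b": 1.75}))
# {'winner': 'incumbent', 'clearing_price': 2.1, 'audit_log': [...]}
```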

This reinforces the need for a public-option mindset in which the government acts as a buyer. Educational institutions, healthcare systems, and local governments can set procurement terms that guarantee openness: any AI assistant serving students or civil servants must let users export their conversations, publish its model and version identifiers, and accept a standard set of content sources (curricula, research, local services) through open APIs that others can use as well. Implemented at scale, across state systems and school districts, such terms would force vendors to adapt. Regulators have spent years debating the theory of harm; buyers can write a theory of change into contracts right now. Brookings has called for a forward-looking policy framework following this ruling; the task is to make that framework practical: interoperability instead of punishment, contracts rather than courtrooms, and speed over spectacle.

Figure 2: Rapid adoption signals an assistant-first shift in discovery; policy should track assistant share of sessions, not just search-engine market share.

Finally, let's talk about measurement. Policymakers often wait for changes in search-engine share to signal progress, but by the time StatCounter shows a "golden cross," assistants will have captured the mindshare that really matters. Instead, we should track the share of discovery that runs through assistants: how often information-seeking sessions start with an assistant, how often the assistant's summary ends the journey, and how many clicks to competing sites follow. Early signs point in this direction: ChatGPT is widely available as a search tool, desktop search usage has begun to tilt toward Bing, and chatbots are now a measurable source of news. None of this means the incumbent is doomed. It shows how the monopoly framework of the 2010s misjudges where power resides in the 2020s: in the interface that answers first and best. Our regulations should follow user behavior.
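A minimal sketch of how such assistant-centered metrics could be computed from session logs follows. The session fields ("first_touch", "ended_by_summary", "downstream_clicks") are assumptions made for the example, not an established measurement standard.

```python
# Hedged sketch: computes the metrics named above from a hypothetical session log.
def assistant_discovery_metrics(sessions: list[dict]) -> dict[str, float]:
    n = len(sessions)
    if n == 0:
        return {"assistant_first_touch": 0.0, "summary_terminated": 0.0, "avg_downstream_clicks": 0.0}
    first = sum(1 for s in sessions if s["first_touch"] == "assistant")
    ended = sum(1 for s in sessions if s["ended_by_summary"])
    clicks = sum(s["downstream_clicks"] for s in sessions)
    return {
        "assistant_first_touch": first / n,   # share of sessions that start with an assistant
        "summary_terminated": ended / n,      # share where the summary ends the journey
        "avg_downstream_clicks": clicks / n,  # how much traffic still reaches the open web
    }

sessions = [
    {"first_touch": "assistant", "ended_by_summary": True, "downstream_clicks": 0},
    {"first_touch": "search", "ended_by_summary": False, "downstream_clicks": 3},
    {"first_touch": "assistant", "ended_by_summary": False, "downstream_clicks": 1},
]
print(assistant_discovery_metrics(sessions))
```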

The opening statistic, 2.5 billion prompts daily, illustrates why the debate over breakups now feels beside the point. The court's decision not to split off Chrome or Android will neither strengthen nor destroy competition; competition shifted to assistants while the case moved through the courts. If lawmakers want competition that benefits students, teachers, and families, they need to stop recycling outdated structural remedies and build new avenues: licensed data-sharing commons, transparent ad auctions, portable defaults, and procurement that demands openness and verifies it. Antitrust, in its traditional, slow form, will remain a backstop for egregious conduct. The focus, however, should be on rules that make switching easy and integration seamless. If the interface stays competitive, we will not need to rely on link counts alone to know whether the market is working. If we keep fixating on breakups, we will arrive too late, just as the market changes again. Time will not slow down for us; our methods must speed up to keep pace.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Barron's. (2025, September 2). U.S. judge rejects splitting up Google in major antitrust case.
Brookings Institution (Tom Wheeler & Bill Baer). (2025, September). Google decision demonstrates need to overhaul competition policy for AI era.
Competition and Markets Authority (U.K.). (2024, April 11). AI foundation models: Update paper.
European Commission. (n.d.). The Digital Markets Act: Ensuring fair and open digital markets.
Google. (2025, September 2). Our response to the court's decision in the DOJ Search case.
OpenAI. (2024, July 25; updates through February 5, 2025). SearchGPT prototype and Introducing ChatGPT search.
Pew Research Center. (2025, June 25). 34% of U.S. adults have used ChatGPT, about double the share in 2023.
Reuters Institute for the Study of Journalism. (2025, June 17). Digital News Report 2025.
Reuters. (2024, April 10). EU's new tech laws are working; small browsers gain market share.
StatCounter Global Stats. (2025, August). Search engine market share worldwide; Desktop search engine market share worldwide.
U.S. Department of Justice. (2025, September 2). Department of Justice wins significant remedies against Google.
9News/CNN. (2025, September 3). Google will not be forced to sell off Chrome or Android, judge rules in landmark antitrust ruling.


When the Feed Is Poisoned, Only Proof Can Heal It


Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

Trusted news wins when fakes surge
Make “proof” visible—provenance, corrections, and methods—not just better detectors
Adopt open standards and clear labels so platforms, schools, and publishers turn credibility into a product feature

By mid-2025, trust in news sits at about 40% across 48 countries, a figure that has held steady for three years even as more people get their news from quick, engaging social video. In 2020, 52% of people consumed news via social video; this year, it is 65%. The volume of content has surged, but trust has not, and that stagnation is an opening for policy. When audiences confront the reality of AI-generated misinformation, they do not simply despair; many turn to trusted brands, the ones whose sourcing, corrections, and accountability are visible, verifiable, and consistent. Over the coming decade, news strategy will depend less on having the best detection models and more on making trust a visible, verifiable product feature. In a media landscape flooded with low-quality content, credibility becomes rare, and scarcity drives up its value.

The battle against misinformation is often framed as a race, with better detection tools pitted against ever more convincing fakes, but that race is not the only strategy. Branding, the public promise of methods and standards, takes center stage. If the public most often verifies questionable claims by turning to "a news source I trust," then policy should encourage newsrooms to compete on proof, not just on speed or scale. A brand becomes more than a logo; it becomes a reliable process, a record of corrections, and a consistent user experience that helps readers quickly identify what is original, what is fake, and what has been confirmed. That is the promise of branding: turning today's anxiety into tomorrow's loyalty and a more credible media landscape.

The Scarcity Premium of Trust

Recent evidence from Germany supports this idea. In an experiment with Süddeutsche Zeitung, readers were shown a message highlighting how difficult it is to distinguish authentic images from AI-generated ones. The immediate result was mixed: concern about misinformation rose by 0.3 standard deviations, and trust in news dropped by 0.1 standard deviations, even for SZ itself; behavior, however, diverged from sentiment. In the days following exposure, daily visits to SZ rose by about 2.5%. Five months later, subscriber retention was 1.1% higher, roughly a one-third drop in churn. When the danger of fake content becomes salient, the value of a brand that helps users sort truth from falsehood rises, and some readers reward that brand with their time and money. Method note: these effects are intention-to-treat estimates from random assignment; browsing and retention are observed outcomes, not self-reports.
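For readers unfamiliar with the estimator, an intention-to-treat comparison simply contrasts average outcomes by assigned group, regardless of whether each reader actually engaged with the message. The toy example below uses invented numbers, not data from the SZ experiment.

```python
# Toy illustration of an intention-to-treat (ITT) estimate: outcomes are
# compared by *assigned* group, whether or not treatment was taken up.
# Numbers are invented, not from the SZ experiment.
def itt_estimate(outcomes_treated: list[float], outcomes_control: list[float]) -> float:
    """Difference in mean outcomes between assigned-treatment and assigned-control groups."""
    mean_t = sum(outcomes_treated) / len(outcomes_treated)
    mean_c = sum(outcomes_control) / len(outcomes_control)
    return mean_t - mean_c

# e.g. daily visits per reader in each assigned group (toy data)
treated = [4.1, 3.9, 4.3, 4.0]
control = [3.9, 3.8, 4.0, 3.9]
print(round(itt_estimate(treated, control), 3))  # 0.175
```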

Figure 1: AI treatment lowered self-reported trust slightly across platforms yet strengthened long-term engagement with a high-reliability outlet. Error bars show 95% confidence intervals; values are in standard-deviation units.

This pattern reflects broader audience behavior. The Reuters Institute's 2025 report shows overall trust holding around 40%, with more people turning to social video. Critically, "a news source I trust" emerged as the top resource for verifying questionable claims (38%), far ahead of AI tools. For policymakers and publishers, the lesson is not to abandon AI but to focus editorial investment on what the public already uses to decide what is real: brand reputation and clear standards. When contamination is evident, scarce credibility pays off in visits, retention, and willingness to pay. The market signal is clear: compete on credibility, not just on volume of content.

Branding as Infrastructure, Not Cosmetics

If branding is to play a bigger role, it cannot be superficial. It must be infrastructural: provenance by default, easy-to-find corrections, and visible human oversight. Progress on open standards is real. The Coalition for Content Provenance and Authenticity (C2PA) anchors a growing ecosystem: Adobe's Content Authenticity Initiative surpassed 5,000 members this year, camera makers like Leica now ship devices that embed Content Credentials, major platforms are testing provenance-based labels, and many newsrooms have pilot projects underway. Policy can accelerate this by tying subsidies, tax breaks, or public-interest grants to the use of open provenance standards in editorial workflows.

The risks we are trying to manage are no longer hypothetical. A recent comparative case study documents how AI-generated misinformation manifests differently across countries and formats—voice cloning in UK politics, image manipulation and re-contextualization in German protests, and synthetic text and visuals fueling rumors on Brazilian social platforms. Fact-checkers in all three contexts identified similar challenges: the rapid spread on closed or semi-closed networks, the difficulty of proving something is fake after initial exposure, and the resource drain of repeated debunks. A nicer homepage will resolve none of these issues. They will be addressed when the brand promise is linked to verifiable assets: persistent Content Credentials for original content, a real-time correction index where changes are timestamped and signed, and explanations of the reporting that clarify methods, sources, and known uncertainties. Those features transform a brand into a tool for the public.
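To show how lightweight a signed corrections index can be, here is a sketch using only the Python standard library. An HMAC with a newsroom-held key stands in for the signature; a real deployment would more likely use an asymmetric scheme (such as Ed25519) so readers could verify entries without access to the signing key. All names and values are placeholders.

```python
# Minimal sketch of a timestamped, signed corrections index (standard library only).
import hmac
import json
import hashlib
from datetime import datetime, timezone

NEWSROOM_KEY = b"replace-with-a-real-secret"  # placeholder key

def sign_correction(article_id: str, before: str, after: str, reason: str) -> dict:
    """Create one corrections-index entry: what changed, when, and a signature over it."""
    entry = {
        "article_id": article_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "before": before,
        "after": after,
        "reason": reason,
    }
    body = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(NEWSROOM_KEY, body, hashlib.sha256).hexdigest()
    return entry

def verify_correction(entry: dict) -> bool:
    """Recompute the signature over everything except the signature field."""
    unsigned = {k: v for k, v in entry.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(NEWSROOM_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

record = sign_correction("2025-06-17-report", "40 countries", "48 countries", "updated figure")
print(verify_correction(record))  # True
```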

Remove the Fuel: Labeling, Friction, and Enforcement

A second key element lies outside the newsroom: platform and regulatory policies that reduce the spread of synthetic hoaxes without limiting legitimate speech. China's new labeling law, effective September 1, 2025, requires that AI-generated text, images, audio, and video carry visible labels and embedded metadata (watermarks) on major platforms like WeChat, Douyin, Weibo, and Xiaohongshu, with enforcement by the Cyberspace Administration of China. Whatever one thinks of Chinese information controls, the technical foundation is transferable and straightforward: provenance should travel with the file, and platforms should display it by default. Democracies will not, and should not, replicate that speech regime. They can, however, adopt the infrastructure: standardized labels, consistent user interfaces, penalties for deceptive removal of credentials, and transparency reports on mislabeled content.

A democratic version would include three components. First, defaults: platforms would auto-display Content Credentials when available and encourage uploaders to attach them, particularly for political ads and news-related content. Second, targeted friction: when provenance is missing or altered on high-risk content, the system would slow its spread—reducing algorithmic boosts, limiting sharing, and providing context cards that direct users to independent sources. Third, accountability: fines or removals for commercial entities that strip credentials or mislabel synthetic assets, with safe harbors for satire, art, and protected speech that are clearly labeled. This is not an abstract wish list. Standards are available, adoption is increasing, and the public has indicated that when they doubt something, they go to a trusted source. Policy should help more outlets earn that trust.
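The "targeted friction" component can be as simple as a scoring rule that demotes, rather than deletes, high-risk content lacking verifiable provenance. The sketch below is illustrative; the thresholds and weights are invented for the example.

```python
# Illustrative sketch of "targeted friction": high-risk content without
# verifiable provenance gets its distribution score reduced, not removed.
# Weights are made-up assumptions for the example.
def distribution_score(base_score: float, has_credentials: bool, high_risk: bool) -> float:
    """Reduce algorithmic boost when provenance is missing on high-risk content."""
    if high_risk and not has_credentials:
        return base_score * 0.3   # slow the spread, do not delete
    if not has_credentials:
        return base_score * 0.8   # mild penalty for ordinary unlabeled content
    return base_score

# Example: a political clip without Content Credentials is demoted, not blocked.
print(distribution_score(100.0, has_credentials=False, high_risk=True))   # 30.0
print(distribution_score(100.0, has_credentials=True, high_risk=True))    # 100.0
```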

The education sector has a specific role. Students, teachers, and administrators now interact with AI in various aspects of their information consumption, from homework help to campus rumors. Curricula can be adapted quickly to include lessons on provenance literacy alongside traditional media literacy, such as how to read a Content Credential, verify if a photo has a signature, and differentiate between verified and unverifiable edits. Procurement guidelines for schools and universities can require C2PA-compatible tools for communications teams. Public institutions can create “trust labs” that collaborate with local news organizations to determine which user interface cues—labels, bylines, correction banners—help different age groups distinguish real from synthetic. The goal is not to transform everyone into a forensic expert, but to make the brand signals of trustworthy outlets clear and to teach people how to use them.

Critics may raise several objections. Labels and watermarks can be removed or forged. This is true. However, open standards make removal detectable, and the goal is not perfection; it’s to make truth easier to verify and lies more costly to maintain. Others may argue that provenance favors large incumbents. It might—if adoption costs are high or if the standards are proprietary. That is why policy should support open-source credentialing tools for small and local newsrooms and tie public advertising or subsidies to their use. Skeptics may also claim that audiences will not care. The German experiment suggests otherwise: the visibility of AI fakes diminished self-reported trust but also encouraged real engagement and retention with a trustworthy outlet. Establish the framework, and usage will follow. Finally, some may argue that this focus is misplaced; the real issue is not isolated deepfakes but the constant stream of low-effort misinformation. The solution is both: provenance helps catch serious fraud, and clear brand signals assist audiences in quickly filtering out low-level noise.

Figure 2: Cumulative subscriber losses were lower in the treatment group over 150 days, confirming that making AI fakes salient improves retention.

The final objection comes from those who seek a purely technological solution. However, audiences have consistently indicated in surveys that they feel uneasy about AI-generated news and would prefer to verify claims through sources they already trust. Detection will improve—and should—but making trust apparent is something we can address today. Practically, this means newsrooms must commit to providing user-facing proof: ongoing Content Credentials for original content, a permanent corrections index, and explanations outlining what we know, what we don’t, and how we know it. It also involves setting platform defaults that promote these signals and regulatory measures that penalize deceptive removal or misuse. The aim is not to outsmart the fakers; it’s to out-communicate them.

We began with a stubborn statistic, 40% trust, set against rising social-video news consumption and an overflow of synthetic content. We end with a practical approach: compete on proof. The German experiment shows that when the threat is made salient, trustworthy brands can hold and even grow audience attention. The public already turns to trusted outlets when facts are uncertain. Standards like C2PA give us the technical basis to make authenticity portable. Even the strictest labeling regimes being introduced abroad carry a simple lesson: provenance should travel with the file and be displayed by default. If education systems, platforms, and publishers align around these signals, we can regain ground without silencing debate or expecting extraordinary vigilance from every reader. The cost of fabrication has dropped to nearly zero. The value of trust, however, remains high. Let us build brands, products, and policies that make that value clear and, even better, easy to choose.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Adobe (Content Authenticity Initiative). (2025, August 8). 5,000 members: Building momentum for a more trustworthy digital world.
Campante, F., Durante, R., Hagemeister, F., & Sen, A. (2025, August 3). GenAI misinformation, trust, and news consumption: Evidence from a field experiment (CEPR Discussion Paper No. 20526). Centre for Economic Policy Research.
Cazzamatta, R. (2025, June 11). AI-Generated Misinformation: A Case Study on Emerging Trends in Fact-Checking Practices Across Brazil, Germany, and the United Kingdom. Journalism & Mass Communication Open, SAGE.
Coalition for Content Provenance and Authenticity (C2PA). (n.d.). About C2PA.
GiJN. (2025, July 10). 2025 Reuters Institute Digital News Report: Eroding public trust and the rise of alternative ecosystems. Global Investigative Journalism Network.
Reuters Institute for the Study of Journalism. (2025, June 17). Digital News Report 2025 (Executive summary & full report). University of Oxford. https://reutersinstitute.politics.ox.ac.uk/digital-news-report/2025
SCMP. (2025, September 1). China’s social media platforms rush to abide by AI-generated content labelling law. South China Morning Post.
Thomson Reuters. (2024, June 16). Global audiences suspicious of AI-powered newsrooms, report finds. Reuters.
Thomson Reuters. (2025, March 14). Chinese regulators issue requirements for the labeling of AI-generated content. Reuters.
VoxEU/CEPR. (2025, September 16). AI misinformation and the value of trusted news.
