AI Political Persuasion Is Easy. Truth Is the Hard Part
By Keith Lee
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
AI political persuasion shifts views by flooding claims; accuracy falls
Education should require evidence budgets and claim-source ledgers
Policy must enforce accuracy floors, provenance by default, and risk labels
One number stands out: 76,977. That’s how many people participated in recent large-scale experiments to see if conversational AI can change political views. Nineteen different models influenced opinions on 707 issues, and the most significant factor in this shift was not clever psychology or targeted strategies. Instead, it was the density of information; responses packed with claims were more effective. For instance, a simple prompt to “use facts and evidence” increased persuasiveness by about 25% compared to a vague “be persuasive,” while personalization had less impact. However, as these systems became more persuasive, their claims became less accurate, revealing a concerning trade-off: more words lead to greater influence, but also to more errors. If we value democratic learning—whether in classrooms, campus forums, or public media—the main issue isn't whether AI political persuasion works. Instead, the concern is that it works because speed and volume often overshadow truth. In short, the more information AI provides, the more persuasive it is, but the less reliable its claims become.
AI political persuasion is really an accuracy problem dressed up as innovation
It shouldn't surprise us that chatbots can persuade. Offering instant answers and never tiring, they present reasoned responses that seem impartial. Studies this month reveal that their persuasive effects can rival—and at times surpass—those of traditional political advertising. Brief conversations during live electoral tests significantly shifted preferences. Other reports suggest that after a single interaction, about 1 in 25 participants leaned toward a candidate. The format appears neutral, and the tone feels fair. This surface neutrality makes it appealing but also risky.
The force behind AI political persuasion is volume masquerading as objectivity. Information-rich replies seem authoritative. They overwhelm readers with details, creating a false impression of consensus. Significantly, the very techniques that enhance this effect—rewarding responses that sound helpful and convincing—also undermine reliability. These models don’t rely on your data to persuade you; they need room to pile on claims. At scale, this leads to easy persuasion, regardless of whether each claim holds up. The studies highlight this trade-off: when systems get tuned to produce denser, fact-based arguments, their persuasive power increases while their factual accuracy declines. This isn’t a breakthrough about human psychology; it’s an accuracy problem in a new guise.
Figure 1: Post-training and information-dense prompting deliver the biggest persuasion gains; model size and personalization matter far less.
Significant inaccuracy is a critical barrier to democracy
If we agree that persuasion is straightforward, the follow-up question is whether the content is accurate. The evidence here is disheartening. In tests before U.S. elections, major chatbots provided false or misleading answers to more than half of basic voting queries. Many of these errors could harm individuals, leading them to miss deadlines, enter incorrect locations, or fabricate rules altogether. These weren’t obscure systems; they were widely accessible tools. If a civic help line had that level of error, it would face closure. The issue is that AI can produce mistakes in fluent, confident prose, which lowers our defenses.
The trend toward ever-larger models does not fix this. Research reviewed in Nature found that newer, larger chatbots were more likely to deliver wrong answers than to acknowledge uncertainty, and users often failed to recognize the errors. This finding aligns with the persuasion studies: as you push models to produce more claims, they soon exhaust the supply of verified facts and drift into plausible-sounding fiction. In daily use, this results in authoritative summaries that use the right language while distorting the facts. Confidence becomes a style rather than a measure.
There is also a structural bias, well documented in safety research, known as sycophancy. When models are tuned on human feedback to please users, they learn to mirror users' beliefs rather than challenge them. This poses particular risks in politics. A system that detects your preferences and then "agrees" in polished terms can plant small, tailored untruths that feel personal. This isn't microtargeting as it used to be understood. It's alignment with your beliefs, executed at scale while appearing objective. Even though the latest persuasion research finds minimal benefits from explicit personalization, the broader tendency to favor agreement persists. The result is substantial inaccuracy: make it dense and agreeable, and the facts become optional.
Figure 2: More than half of answers were inaccurate; a large share was harmful or incomplete—showing the risk of fluent error at scale.
What education systems should do now
Education has a crucial role in mitigating these issues. We don't need to ban AI political persuasion in classrooms to manage its harms. Instead, we should set rules that prioritize truth. Begin with evidence budgets. Any AI-assisted political text used in courses should have a strict requirement: every claim must link to a source that students can verify in under two clicks. If a source cannot be provided, the claim won't count toward the assignment. This shifts the focus. Density alone won't earn points; verifiability will. Coupled with regular checks against primary sources, such as statutes, reports, and peer-reviewed articles, this approach reduces the chance that a polished paragraph becomes a vector for error. The main takeaway here is that educational standards should reward verifiable claims, not just persuasive writing.
Next, implement a claim-source ledger for all assignments involving AI-generated political content. This ledger should be a simple table, not an essay: claim, link, retrieval date, and independent confirmation note. It should accompany the work. Administrators can make this a requirement for campus tools; any platform that can’t produce a ledger shouldn’t be used. Instructors can grade the ledger separately from the writing to highlight the importance of accuracy. Over time, students will realize that style doesn’t equal truth. This practice is intentionally mundane because it helps establish habits that counter fluent fiction.
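To make the ledger concrete, here is a minimal sketch in Python of how an instructor or platform might check one, assuming a simple CSV workflow. The column names (claim, source_url, retrieved, confirmation) and the file name ledger.csv are illustrative assumptions rather than a prescribed format; the point is that claims without a link or a confirmation note drop out of the evidence budget automatically.

```python
# A minimal sketch of a claim-source ledger check, assuming a CSV whose header
# matches the field names below; the layout is illustrative, not a standard.
import csv
from dataclasses import dataclass


@dataclass
class LedgerEntry:
    claim: str         # the factual claim as written in the assignment
    source_url: str    # link a reader can verify in under two clicks
    retrieved: str     # retrieval date, e.g. "2025-12-04"
    confirmation: str  # independent confirmation note (who or what verified it)


def load_ledger(path: str) -> list[LedgerEntry]:
    """Read a ledger CSV with columns matching the LedgerEntry fields."""
    with open(path, newline="", encoding="utf-8") as f:
        return [LedgerEntry(**row) for row in csv.DictReader(f)]


def unverified_claims(entries: list[LedgerEntry]) -> list[str]:
    """Claims missing a source or a confirmation note do not count toward the budget."""
    return [e.claim for e in entries
            if not e.source_url.strip() or not e.confirmation.strip()]


if __name__ == "__main__":
    ledger = load_ledger("ledger.csv")  # hypothetical file name
    missing = unverified_claims(ledger)
    print(f"{len(ledger) - len(missing)} of {len(ledger)} claims are verifiable.")
    for claim in missing:
        print("Excluded from evidence budget:", claim)
```

Graded separately from the essay itself, a report like this keeps the accuracy check mechanical and boring, which is the point.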
Third, teach students about failure modes. Lessons on sycophancy, hallucination, and the trade-off between volume and persuasion should be integrated into civics and media literacy courses. The aim isn’t technical mastery but recognition. Students who understand that “agreeable and dense” signals something wrong will pause to think. They will ask for the ledger and search for primary sources. This isn’t a quick fix, but it shifts the default from trust to verification in areas that will shape the next generation of voters.
Standards that prioritize truth over volume
Policy should address the gap between what models can do and what democratic spaces can handle. First, set a minimum accuracy requirement for any system allowed to answer election or public policy questions at scale, and test it through independent audits on live, localized queries. If a system doesn’t meet this standard, limit its outreach on civic topics: shorter answers, more disclaimers, and a visible “low-confidence” label. This isn’t censorship; it’s the same safety principle we apply to medical devices and food labeling, applied to information infrastructure.
Second, make source information automatic. For civic and educational use, persuasive outputs should ship with source lists, timestamps, and a brief explanation of the model's limitations. Platforms can already do this selectively; the rule should be to do it every time public life is involved. Where models are designed to produce dense, evidence-based outputs, ensure that those "facts" link to traceable records. If they don't, the model should say so clearly and stop. Research shows that "information-dense" prompts enhance persuasion; the standard should make density conditional on proof.
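As a rough illustration of that rule, the sketch below shows one way an application layer could gate civic answers so provenance is attached by default. It assumes the surrounding system already extracts (claim, source) pairs from the model output; the data shapes, field names, and the withhold-and-label behavior are assumptions for illustration, not an existing platform API.

```python
# A minimal sketch of "provenance by default" for civic answers, assuming the
# application already extracts (claim, source) pairs from the model output.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CivicAnswer:
    text: str
    claims: list[dict]  # each item: {"claim": str, "source": str or None}
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


def gate_civic_answer(answer: CivicAnswer) -> dict:
    """Release a civic answer only with sources attached; otherwise label it and stop."""
    unsourced = [c["claim"] for c in answer.claims if not c.get("source")]
    if unsourced:
        return {
            "status": "withheld",
            "label": "low-confidence: unsourced claims",
            "unsourced_claims": unsourced,
            "generated_at": answer.generated_at,
        }
    return {
        "status": "released",
        "answer": answer.text,
        "sources": [c["source"] for c in answer.claims],
        "generated_at": answer.generated_at,
        "note": "Sources reflect retrieval at generation time; verify before citing.",
    }
```

The design choice to withhold rather than silently trim unsourced claims is deliberate: density without proof is exactly the failure mode the standard should block.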
Third, create a persuasion-risk label for use on campuses and in public applications. If a system increases persuasion significantly after training, the public deserves to know. Labels should disclose known trade-offs and expected error rates for political topics. They should also clarify mitigation efforts: how the system avoids sycophancy, how it addresses uncertainty, and what safeguards are in place when users ask election questions. A transparency system like this encourages a market shift toward features that promote accuracy, rather than just style, and gives educators a grounded framework for selecting tools.
A reality check for the “neutral” voice
Some may argue that if real-world persuasion effects are minor, there’s no need to worry. Field studies and media reports indicate that attention is limited, and the impact outside controlled settings may be less significant. While this is true, it overlooks the cumulative effect of AI political persuasion. Thousands of small nudges, each delivered in a neutral tone, accumulate. The risk isn’t a single overwhelming speech. It lies in a constant stream of convincing half-truths that blurs the line between learning and lobbying. Even if one chatbot conversation sways only a few individuals, repeated exposure to dense information without verification can shift norms toward speed over accuracy.
Another argument suggests that more information is always beneficial. The best counter is to use evidence. The UK’s national research program on AI persuasion found that “information-dense” prompts do boost persuasive power—and that the same tuning lessens factual accuracy. This pattern is evident in journalistic summaries and official blog posts: pushing for more volume leads to crossing an invisible line into confident errors. The solution, then, isn’t to shy away from information. It is to demand a different standard of value. In civic spaces, the proper measure isn’t words per minute, but verified facts per minute. This metric should be integrated into teaching, procurement, and policy.
The key takeaway isn’t that chatbots can sway voters. For decades, we’ve known that well-crafted messages can influence people and that larger scales extend reach. What’s new is the cost of that scale in the AI era. The latest research shows that persuasion increases when models are trained to bombard us with claims, while accuracy declines when those claims exceed available evidence. This isn’t a pattern that will self-correct; it’s a consequence of design. The response must be a designed countermeasure. Education can take the lead: establish evidence budgets, make claim-source ledgers a habit, and help students understand the limits of “neutral” voices. Policy can follow with accuracy standards, automatic source tracking, and risk labels that acknowledge these trade-offs. If we do this, next time we hear a confident, fact-filled response to a public issue, we won’t just ask about the smoothness of the delivery. We’ll question the source of the information and whether we can verify it. That approach will ensure that AI-driven political persuasion supports democracy rather than undermining it.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Angwin, J., et al. (2024). Seeking Reliable Election Information? Don’t Trust AI. Proof News & Institute for Advanced Study. Retrieved Feb. 27, 2024.
Anthropic. (2023). Towards Understanding Sycophancy in Language Models. Research blog.
Associated Press. (2024). Chatbots’ inaccurate, misleading responses about U.S. elections threaten to keep voters from polls. Feb. 27, 2024.
Béchard, D. E. (2025). AI Chatbots Are Shockingly Good at Political Persuasion. Scientific American, Dec. 2025.
The Guardian. (2025). Chatbots can sway political opinions but are ‘substantially’ inaccurate, study finds. Dec. 4, 2025.
Hackenburg, K., et al. (2025). The levers of political persuasion with conversational AI. Science (Dec. 2025), doi:10.1126/science.aea3884. See summary by AISI.
Jones, N. (2024). Bigger AI chatbots more inclined to spew nonsense—and people don’t always realize. Nature (News), Oct. 2, 2024.
Kozlov, M. (2025). AI chatbots can sway voters with remarkable ease—is it time to worry? Nature (News), Dec. 4, 2025.
Lin, H., et al. (2025). Persuading voters using human–artificial intelligence dialogues. Nature, Dec. 4, 2025, doi:10.1038/s41586-025-09771-9.
UK AI Safety Institute (AISI). (2025). How do AI models persuade? Exploring the levers of AI-enabled persuasion through large-scale experiments. Blog explainer, Dec. 2025.
Washington Post. (2025). Voters’ minds are hard to change. AI chatbots are surprisingly good at it. Dec. 4, 2025.
When Algorithms Say Nothing: Fixing Silent Failures in Hiring and Education
By Catherine Maguire
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI tools exclude people through missing data and bugs
Count “no-decision” cases and use less-exclusionary methods with human review
Set exclusion budgets, fix data flows, and publish exclusion rates
A quiet fact sets the stakes. In 2024, a global HR survey found that 53% of organizations already use AI in recruitment, often to filter résumés before a human ever reviews them. Yet in a recent, legally required audit of a résumé-screening tool, 11.4% of applicant criteria were labeled "uncertain" and excluded from analysis. In another audit of a central matching platform, over 60 million applications had unknown race or ethnicity fields, making fairness hard to test and easy to overlook. When systems cannot decide or measure, people fall out of view. This is algorithmic exclusion, and it is not a rare issue; it is a structural blind spot that eliminates qualified applicants, obscures harm, and weakens trust in AI across education and work. We can reduce bias only if we also address silence: the moments when the model returns nothing or hides behind missing data, leaving people invisible without anyone noticing.
Algorithmic exclusion is systemic, not edge noise
The familiar story about unfair algorithms focuses on biased predictions. But exclusion starts earlier. It begins when models are designed to prioritize narrow goals like speed, cost, or click-through rates, and are trained on data that excludes entire groups. It worsens when production code is released with common software defects. These design, data, and coding errors together push people outside the model's view. Leading institutions now recognize "no prediction" as a significant harm. A recent policy proposal suggests that regulations should recognize algorithmic exclusion alongside bias and discrimination, as sparse, fragmented, or missing data consistently yield empty outputs. If the model cannot "see" a person's history in its data, it either guesses poorly or returns nothing at all. Both outcomes can block access to jobs, courses, credit, and services, and neither shows up if we audit only those who received a score.
Exclusion is measurable. In face recognition, the U.S. National Institute of Standards and Technology has long recorded demographic differences in error rates across age and sex groups; these gaps persist in real-world conditions. In speech-to-text applications, studies show higher error rates for some dialects and communities, affecting learning tools and accessibility services. In hiring, places like New York City now require bias audits for AI-assisted selection tools. However, most audits still report only pass/fail ratios for race and gender, often excluding records with “unknown” demographics. This practice can obscure exclusion behind a wall of missing data, making it crucial for stakeholders to understand the silent failures that undermine fairness and transparency.
Figure 1: Lower broadband and smartphone access among older and lower-income groups raises the odds of “no-decision” events in AI-mediated hiring and learning; connectivity gaps become exclusion gaps.
Algorithmic exclusion in hiring and education reduces opportunity
The numbers are specific. One résumé-screening audit from January to August 2024 examined 123,610 applicant criteria and reported no formal disparate impact under the four-fifths rule. However, it also showed that 11.4% of the applicant criteria were marked "uncertain" and excluded from the analysis. Among the retained records, selection rates differed significantly: for instance, 71.5% for White applicants versus 64.4% for Asian applicants at the criterion level. Intersectional groups like Asian women had impact ratios in the mid-80s (percent). These gaps may not reach a legal threshold, but they indicate drift. More critically, the excluded "uncertain" pool poses a risk: if the tool is more likely to be uncertain about non-linear careers, school re-entrants, or people with fragmented data, exclusion becomes a sorting mechanism that no one chose and no one sees.
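For readers unfamiliar with the four-fifths rule invoked above, the short Python sketch below shows the arithmetic. It reuses the criterion-level selection rates quoted in the audit summary purely to illustrate the check, under the assumption that those rates are already computed; it is not a re-analysis of the underlying data.

```python
# A minimal sketch of the four-fifths (80%) rule check used in such audits.
def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Selection rate of a group divided by the highest (reference) selection rate."""
    return group_rate / reference_rate


def four_fifths_flag(ratio: float, threshold: float = 0.80) -> bool:
    """True if the ratio falls below the conventional 80% threshold."""
    return ratio < threshold


if __name__ == "__main__":
    white_rate, asian_rate = 0.715, 0.644        # criterion-level rates quoted above
    ratio = impact_ratio(asian_rate, white_rate)  # about 0.90, above the 0.80 line
    print(f"Impact ratio: {ratio:.2f}, flagged: {four_fifths_flag(ratio)}")
    # Note what this check never sees: the 11.4% "uncertain" pool is excluded
    # before either selection rate is computed.
```

The closing comment is the point of the example: a pass under the four-fifths rule says nothing about the applicants who were never scored at all.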
Figure 2: Higher renter rates for Black and Hispanic households signal greater address churn and thinner administrative records—conditions that inflate “unknown” fields and trigger algorithmic exclusion.
Scale amplifies the problem. A 2025 audit summary for a large matching platform listed over 9.5 million female and 11.9 million male applicants, but also recorded 60,263,080 applications with unknown race or ethnicity and more than 50 million with at least one unknown demographic field. If fairness checks depend on demographic fields, those checks become weakest where they are most needed. Meanwhile, AI in recruiting has become common: by late 2024, just over half of organizations reported using AI in recruitment, with 45% specifically using AI for résumé filtering. Exclusion at a few points in extensive talent funnels can therefore deny thousands of qualified applicants a chance to be seen by a human.
Regulators are taking action, but audits must improve transparency to reassure stakeholders that fairness is being actively monitored. The EEOC has clarified that Title VII applies to employers’ use of automated tools and highlighted the four-fifths rule for screening disparities. The OFCCP has instructed federal contractors to validate AI-assisted selection procedures and to monitor potential adverse impacts. New York City’s Local Law 144 requires annual bias audits and candidate notification. These are fundamental steps. However, if audits lack transparency about “unknown” demographics or “uncertain” outputs, exclusion remains hidden. Education systems face similar issues: silent failures in admissions, course placement, or proctoring tools can overlook learners with non-standard records. Precise, transparent measurement of who is excluded is essential for effective policy and practice.
Fixing algorithmic exclusion requires new rules for measurement and repair
First, count silence. Any audit of AI-assisted hiring or educational tools should report the percentages of "no decision," "uncertain," or "not scored" outcomes by demographic segment and by critical non-demographic factors such as career breaks, school changes, or ZIP codes with limited broadband. Measuring silence this way surfaces hidden exclusion. It also discourages overconfident automation: if your tool records a 10% "uncertain" rate concentrated in a few groups, having a human in the loop is not just a courtesy; it is a safety net. The Brookings proposal to formalize algorithmic exclusion as a category of harm gives regulators a lever: require "exclusion rates" in public scorecards and treat high rates without a remediation plan as non-compliance.
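A minimal sketch of what such an exclusion-rate report could look like follows. It assumes each applicant record carries an outcome field ("scored", "uncertain", or "no_decision") and a segment label; the field names and the 10% budget are illustrative assumptions, not a regulatory standard.

```python
# A minimal sketch of an exclusion-rate report with an exclusion budget;
# record fields and the 10% budget are illustrative assumptions.
from collections import Counter

EXCLUSION_BUDGET = 0.10  # max share of unscored applicants tolerated per segment


def exclusion_report(records: list[dict]) -> dict[str, float]:
    """Share of applicants per segment that received no usable score."""
    totals, excluded = Counter(), Counter()
    for r in records:
        seg = r.get("segment", "unknown")
        totals[seg] += 1
        if r.get("outcome") in {"uncertain", "no_decision"}:
            excluded[seg] += 1
    return {seg: excluded[seg] / totals[seg] for seg in totals}


def over_budget(report: dict[str, float], budget: float = EXCLUSION_BUDGET) -> list[str]:
    """Segments whose exclusion rate exceeds the budget and therefore need human review."""
    return [seg for seg, rate in report.items() if rate > budget]


if __name__ == "__main__":
    sample = [
        {"segment": "career_break", "outcome": "uncertain"},
        {"segment": "career_break", "outcome": "scored"},
        {"segment": "standard", "outcome": "scored"},
        {"segment": "standard", "outcome": "scored"},
    ]
    report = exclusion_report(sample)
    print(report, "over budget:", over_budget(report))
```

Publishing a table like this alongside the usual impact ratios is what turns "count silence" from a slogan into a scorecard.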
Second, make less-discriminatory alternatives (LDAs) the standard practice. In lending, CFPB supervision now scrutinizes AI/ML credit models, and consumer groups argue that searches for LDAs should be part of routine compliance. The same logic applies to hiring and education. If a ranking or filtering algorithm shows adverse impact or high exclusion rates, operators should test and document an equally effective alternative that excludes fewer candidates. This could involve deferring to human review for edge cases, replacing rigid résumé parsing with skills-based prompts, or using structured data from standardized assignments instead of unclear proxies. Prioritizing the reduction of exclusion while preserving job-relatedness steers organizations toward fairer AI practices.
Third, fix the code. Many real-world harms stem from ordinary defects, not complex AI math. The best estimate of the U.S. cost of poor software quality in 2022 is $2.41 trillion, with defects and technical debt as significant factors. This impacts education and HR. Flaws in data pipelines, parsing, or threshold logic can quietly eliminate qualified records. Organizations should conduct pre-deployment defect checks across both the data and decision layers, not just the model itself. Logging must ensure unscored cases are traceable. And when a coding change raises exclusion rates, it should be rolled back as we would treat security regressions. Quality assurance directly relates to fairness assurance.
What schools and employers can do now to reduce algorithmic exclusion
Educators and administrators should start by mapping where their systems fail to return a result at all. Begin with placement and support technologies: reporting dashboards must indicate how many students the system cannot score and for whom. If remote proctoring, writing assessments, or early-alert tools struggle with low-bandwidth connections or non-standard language varieties, prioritize routing those cases to human review. Instructors should have simple override controls and a straightforward escalation process. At the same time, capture voluntary, privacy-respecting demographic and contextual data and use it only for fairness monitoring. If the "unknown" category is large, that signals a need to improve data flows, not a reason to drop those records from the analysis.
Employers can take a similar approach. Require vendors to disclose exclusion rates, not just impact ratios. Refuse audits that omit unknown demographics without a thorough sensitivity analysis. For high-stakes roles, set an exclusion budget: the maximum percentage of applicants who can be auto-disqualified or labeled "uncertain" without human review. Use skills-based methods and structured work samples to enlarge the data footprint for candidates with limited histories, and log instances when the system cannot parse a résumé so candidates can correct their records. Lastly, track both current rules and the direction of travel: comply with New York City's bias-audit rules, the EEOC's Title VII disparate-impact guidelines, and the OFCCP's validation expectations. This is not mere compliance; it is a shift from one-time audits to ongoing monitoring of the places where models most often falter.
Measure the silence, rebuild the trust
The key statistic that should dominate every AI governance discussion is not only who passed or failed. It is those who were not scored—the “uncertain,” the “unknown,” the missing individuals. In 2024–2025, we learned that AI has become integral in the hiring process for most firms, even as audits reveal significant gaps in unmeasured applicants and unresolved ambiguities. These gaps are not mere technicalities. They represent lost talent, overlooked students, and eroded policy legitimacy. By treating algorithmic exclusion as a reportable harm, establishing exclusion budgets, and mandating less-exclusionary alternatives, we can keep automation fast while ensuring it is fair. This should be the guiding principle for schools and employers: ongoing measurement of silent failures, human review for edge cases, and transparent, public reporting. We do not have to wait for the perfect model. We can take action now to recognize everyone that the algorithms currently overlook.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ashby. (2024, August). Bias Audit for Ashby’s Criteria Evaluation model. FairNow.
Consumer Financial Protection Bureau. (2024, June). Fair Lending Report of the Consumer Financial Protection Bureau, FY 2023.
Consumer Reports & Consumer Federation of America. (2024, June). Statement on Less Discriminatory Algorithms.
Eightfold AI. (2025, March). Summary of Bias Audit Results (NYC LL 144). BABL AI.
Equal Employment Opportunity Commission. (2023). Assessing adverse impact in software, algorithms, and AI used in employment selection procedures under Title VII.
HR.com. (2024, December). Future of AI and Recruitment Technologies 2024–25.
New York City Department of Consumer and Worker Protection. (2023). Automated Employment Decision Tools (AEDT).
NIST. (2024). Face Recognition Technology Evaluation (FRTE).
OFCCP (U.S. Department of Labor). (2024). Guidance on federal contractors’ use of AI and automated systems.
Tucker, C. (2025, December). Artificial intelligence and algorithmic exclusion. The Hamilton Project, Brookings Institution.
Consortium for Information & Software Quality (CISQ). (2022). The Cost of Poor Software Quality in the U.S.
Koenecke, A., Nam, A., Lake, E., et al. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689.
Beyond the Ban: Why AI Chip Export Controls Won’t Secure U.S. AI Leadership
By Ethan McGowan
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Export controls slow foes, not secure leadership
Invest in compute, clean power, talent
Make NAIRR-style Compute Commons permanent
U.S. data centers used about 183 terawatt-hours of electricity in 2024, roughly 4% of total U.S. power consumption. This figure is likely to more than double by 2030. The United States already accounts for 45% of global data-center electricity use. These numbers reveal a clear truth: the future of American AI relies less on what leaves our ports and more on whether we can provide the computing power, energy, and talent needed to build and use systems domestically. While AI chip export controls may seem practical, they are not the deciding factor in who leads. They can slow rivals marginally, but do not build labs, wire campuses, or train students. We must focus on transforming energy, infrastructure, and education into a self-reinforcing system. With every semester we delay, the cost of missed opportunities increases, and our advantage shrinks.
AI chip export controls are a blunt tool
Recent proposals suggest a 30-month ban on licensing top accelerators to China and other adversaries, formalizing and extending the Commerce Department's rules. The goal is straightforward: restrict access to advanced chips so competitors fall behind. However, the policy landscape is already complicated. In 2023 and 2024, Washington tightened regulations; in 2025, Congress discussed new "SAFE CHIPS" and "Secure and Feasible Exports" bills; and the House considered a GAIN AI Act adding certification requirements for export licenses. These measures mainly codify what regulators are already doing. They add compliance costs and may complicate enforcement. They could also provoke reciprocal actions abroad and push trade into opaque channels that are harder to monitor.
These unclear channels are real. According to a Reuters report, between April and July 2025, more than $1 billion worth of Nvidia AI chips entered China via black market channels despite strict U.S. export restrictions. Export-compliant “China-only” chips came and went as rules changed; companies wrote down their inventory; and one supplier reported no sales of its redesigned parts to China in a recent quarter after new licensing requirements were enforced. Controls produce effects, but those effects are messy, leaky, and costly domestically. In summary, AI chip export controls disappoint security advocates while imposing significant collateral damage at home.
Figure 1: The U.S. carries the largest load; leadership depends on turning that load into learning.
Competition, revenue, and the U.S. innovation engine
There's a second issue: innovation follows scale. U.S. semiconductor firms allocate a significant portion of their revenue to research and development, averaging around 18% in recent years and 17.7% in 2024. One leading AI chip company spent about $12.9 billion on R&D in fiscal 2025; because sales grew even faster, that spending fell as a share of revenue. That funding supports new architectures, training stacks, and tools that benefit universities and startups. A global market shrunk by statute threatens the reinvestment cycle, particularly for suppliers and EDA tool companies that rely on the growth of system integrators.
Supporters of stricter controls point out that U.S. firms still command just over 50% of global chip revenues, per the Semiconductor Industry Association, and argue that this is strong enough to sustain industry leadership. They cite record data-center revenue, long waitlists, and robust order books. All of this is true, and it reinforces the point. When companies are forced to withdraw from entire regions, they incur losses on stranded "China-only" products and face margin pressure. Over time, those pressures affect hiring plans, supplier decisions, and university partnerships. One research estimate shows China's share of revenue for a key supplier dropping to single digits by late 2025; another quarter included a multi-billion-dollar write-off tied to changing export rules. Innovation relies on steady cash flow and clear planning horizons. AI chip export controls that fluctuate year to year do the opposite. The real question is not "ban or sell"; it is how to minimize leakage while maintaining domestic growth.
Figure 2: Smuggling value is large, but the domestic write-down cost is far larger.
China’s catch-up is real, but it has limits
China is making rapid progress. Domestic accelerators from Huawei and others are being shipped in volume; SMIC is increasing its advanced-node capacity; and GPU designers are eager to go public on mainland exchanges. One firm saw its stock rise by more than 400% on debut this week, thanks to supportive policies and local demand. Nevertheless, impressive performance on paper does not necessarily translate into equal capability in practice. Reports highlight issues with inter-chip connectivity, memory bandwidth, and yields; Ascend 910B production faces yield challenges around 50%, with interconnect bottlenecks being just as significant as raw performance for training large models. China can and will produce more, but its path remains uneven and costly, especially if it lacks access to cutting-edge tools. A CSIS report notes that when export controls are imposed, as seen with China, the targeted country often intensifies its own development efforts, which could lead to significant technological breakthroughs. This suggests that while export controls can increase challenges and production costs, they do not necessarily prevent a country from closing the competitive gap.
Where controls create barriers, workarounds appear. Parallel import networks route orders through third countries; cloud access is negotiated; "semi-compliant" parts proliferate until the rules change again. This ongoing dynamic strengthens China's motivation to develop domestic substitutes. In short, strict bans can accelerate China's substitution drive when they are imposed without credible, consistent enforcement and without matching U.S. investment that keeps pushing the frontier at home. In 2024–2025, policymakers proposed new enforcement measures, such as tamper-resistant verification and expanded "validated end-user" programs for data centers. That is the right direction: smarter enforcement, fewer loopholes, and predictability, paired with significant investment in American computing and power capacity for research and education.
An education-first industrial policy for AI
If the United States aims for lasting dominance, it needs a national education and computing strategy that can outpace any rival. The NAIRR pilot, initiated in 2024, demonstrated its effectiveness by providing researchers and instructors with access to shared computing and modeling resources. By 2025, it had supported hundreds of projects across nearly every state and launched a 'Classroom' track for hands-on teaching. This is more important than it may seem. Most state universities cannot afford modern AI clusters at retail prices or staff them around the clock. Shared infrastructure transforms local faculty into national contributors and provides students with practical experience with the same tools used in industry. Congress should make NAIRR permanent with multi-year funding, establish regional 'Compute Commons' driven by public universities, and link funding to inclusive training goals. According to a report from the University of Nebraska System, the National Strategic Research Institute (NSRI) has generated $35 in economic benefits for every $1 invested by the university, illustrating a strong return on investment. This impressive outcome underscores the importance of ensuring that all students in public programs can access and work with real hardware as a standard part of their education, rather than as an exception.
Compute without power is just a plan on paper. AI demand is reshaping the power grid. U.S. data-center loads accounted for about 4% of national electricity in 2024 and are projected to more than double by 2030. Commercial computing now accounts for 8% of electricity use in the commercial sector and is growing rapidly. One utility-scale deal this week included plans for multi-gigawatt campuses for cloud and social-media operators, highlighting the scale of what lies ahead. Federal and state policy should view university-adjacent power as a strategic asset: streamline connections near public campuses, create templates for long-term clean power purchase agreements that public institutions can actually sign, and prioritize transmission lines that connect "Compute Commons" to low-carbon energy sources. By aligning clean-power initiatives with campus infrastructure, we could spark the development of regional tech clusters, strengthening educational capacity and attracting industry partners eager to tap a well-connected talent pool. The resulting talent-and-energy feedback loop can broaden political support well beyond energy committees. When AI chip export controls dominate the discussion, this vital bottleneck is often overlooked. Yet it determines who can teach, who can engage in open-science AI, and who can graduate students with real-world experience on production-grade systems.
Leadership is built, not blocked. It is built in labs, on the grid, and in public universities that prepare the next generation of builders. The policies that secure it are the ones outlined here: make NAIRR-style compute commons permanent, tie clean power to campuses, keep enforcement of export rules smart and predictable, and measure success by what we build at home rather than by what we manage to withhold.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bloomberg. (2025, Dec. 4). Senators Seek to Block Nvidia From Selling Top AI Chips to China.
Brookings Institution. (2025, Dec. 3). Why the GAIN AI Act would undermine US AI preeminence.
CSET (Georgetown). (2024, May 8). The NAIRR Pilot: Estimating Compute.
CSIS. (2024, Dec. 11). Understanding the Biden Administration’s Updated Export Controls.
CSIS. (2025, Nov. 6). The Architecture of AI Leadership: Enforcement, Innovation, and Global Trust.
ExecutiveGov. (2025, Dec.). Bipartisan House Bill Seeks to Strengthen Enforcement of AI Chip Export Controls.
Financial Times. (2025, Dec.). Chinese challenger to Nvidia surges 425% in market debut.
Financial Times via Reuters summary. (2025, Jul. 24). Nvidia AI chips worth $1 billion entered China despite U.S. curbs.
IEA. (2024, Jan. 24). Electricity 2024.
IEA. (2025, Apr. 10). Global data centre electricity consumption, 2020–2030 and Share by region, 2024.
IEA. (2025). Energy and AI: Energy demand from AI and Executive Summary.
MERICS. (2025, Mar. 20). Despite Huawei’s progress, Nvidia continues to dominate China’s AI chips market.
NVIDIA. (2025, Feb. 26). Financial Results for Q4 and Fiscal 2025.
NVIDIA. (2025, May 28). Financial Results for Q1 Fiscal 2026. (H20 licensing charge.)
NVIDIA. (2025, Aug. 27). Financial Results for Q2 Fiscal 2026. (No H20 sales to China in quarter.)
Pew Research Center. (2025, Oct. 24). What we know about energy use at U.S. data centers amid the AI boom.
Reuters. (2025, Dec. 8). NextEra expands Google Cloud partnership, secures clean energy contracts with Meta.
Reuters. (2025, Dec. 4). Senators unveil bill to keep Trump from easing curbs on AI chip sales to China.
Semiconductor Industry Association (SIA). (2025, Jul.). State of the U.S. Semiconductor Industry Report 2025.
Tom’s Hardware. (2025, Dec.). Nvidia lobbies White House… lawmakers reportedly reject GAIN AI Act.
U.S. Bureau of Industry and Security (BIS). (2025, Jan. 15). Federal Register notice on Data Center Validated End-User program.
U.S. EIA. (2025, Jun. 25). Electricity use for commercial computing could surpass other uses by 2050.
The Wall Street Journal, LA Times, and other contemporaneous reporting on lobbying dynamics (cross-checked for consistency).