David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.
AI is making labor borderless as online services surge
Opportunity expands, but standards, audits, and broadband are crucial
Schools must teach task-first skills, platform literacy, and safeguards
The fastest-growing part of global trade is undergoing a profound transformation. In 2023, digitally deliverable services—work sent and received through networks—reached about $4.5 trillion, and for the first time developing economies surpassed $1 trillion in exports. Digitally deliverable services now make up more than half of all services trade. Borders still exist, but for more and more tasks they function like dimmer switches rather than walls. This is the world of borderless AI labor, in which millions of people annotate data, moderate content, build software, translate, advise, and teach from anywhere. The rise of large language models and affordable coordination tools has accelerated the shift. Employers can now swiftly assemble distributed teams, manage them with algorithmic systems, and pay them across time zones. The key question is not whether jobs will shift, but who sets the rules for that shift and how schools and governments prepare people to succeed in it.
The borderless AI labor market is here
The numbers clearly illustrate this trend. In a 2025 survey of employers, roughly 40% of firms indicated they would reduce staff where AI automates tasks. At the same time, they are reorienting their business models around AI and hiring for new skills. This change does not spell disaster. By 2030, the best estimate is a net gain of 78 million jobs globally, with 170 million created and 92 million displaced. To put it another way: tasks are moving, talent strategies are changing, and opportunities are opening—often across borders. The borderless AI labor market is not just a shift; it is a gateway to growth and opportunity.
Figure 1: Global digital services hit $4.5T in 2023; developing economies passed $1T, confirming the rise of borderless AI labor.
Hiring practices are already reflecting this shift. Data from platforms show that remote and cross-border hiring increased through 2024. Over 80% of roles on one global platform were classified as remote, and cross-border hiring rose by about 42% year-over-year. In the United States alone, 64 million people—or 38% of the workforce—freelanced in 2023, many serving clients they never met in person. These trends, coupled with the surge in digitally delivered services, confirm that borderless AI labor has scale and momentum.
Winners and risks in borderless AI labor
The benefits are significant. For the roughly 2 billion people in informal work—over 60% of workers in much of the Global South—AI-enabled matching, translation, and reputation tools can make skills visible to employers and open paths to higher-value tasks. This is the inclusive promise of borderless AI labor, in which visibility and trust are the currency of global services. When communication, payments, and rating systems travel with the worker, location matters less and demonstrated performance matters more.
However, the risks are real and immediate. The global AI supply chain depends on data labelers and content moderators in cities such as Nairobi, Manila, Accra, and Bogotá. Over the past two years, research and litigation have documented psychological harm, low pay, and opaque practices within this ecosystem. New work by policy groups is beginning to show how moderation systems fail in non-English contexts and low-resource languages. The solution is not to withdraw from global work but to set enforceable standards for wages, safety, and mental health support wherever moderation and annotation take place. Automation can filter some content, but it cannot replace the need for decent labor conditions.
A subtler risk comes from algorithmic management itself. This refers to the use of AI tools to assign tasks, monitor productivity, and evaluate workers at scale. Recent surveys in advanced economies reveal widespread use of these systems and varied governance. Many companies have policies, but many workers still express concerns about transparency and fairness. In cross-border teams, power imbalances can worsen if ratings and algorithm-based decision-making are unaccountable. The solution is not to ban these tools; it is to make them auditable and explainable, providing clear recourse for workers when mistakes happen.
There is also a broader change. As tasks move online, the demand for office space is shifting unevenly. In the U.S., office vacancies reached around 20% by late 2024, the highest level in decades, with distress in the sector nearing $52 billion. This does not mean cities will die; rather, it indicates a shift away from low-performing spaces toward digital infrastructure and talent. Employers will continue to focus on time zones, skills, and cost—rather than postal codes. Borderless AI labor will keep driving this change.
Rules for a fair marketplace in borderless AI labor
First, establish portable labor standards for digital work. Governments that buy AI services should require contractors—and their subcontracted platforms—to meet basic conditions, including living-wage floors based on purchasing power, mandated rest periods, and access to mental health support for high-risk jobs like content moderation. International cooperation can help align these standards with existing due diligence laws. A credible standard would connect buyers in cities like New York or Berlin to the outcomes for workers in places like Nairobi or Cebu.
Second, include algorithmic management in policy discussions. Regulators should require companies that use automated assignment, monitoring, or evaluation to disclose such practices, document their effects, and provide appeals processes. Evidence shows that many companies already report governance measures, but deployment of these systems often outpaces oversight. Clear rules, alongside audits by independent organizations, can reduce bias, limit intrusive monitoring, and protect the integrity of cross-border teams. Where national laws do not apply, public and institutional buyers can enforce these obligations through contracts.
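To make "auditable and explainable" concrete, here is a minimal sketch of what a logged record for an automated task-assignment decision could contain, assuming a platform keeps an append-only log that auditors and workers can query. The field names and the `log_decision` helper are illustrative assumptions, not drawn from any existing regulation or platform.

```python
# A minimal sketch of an auditable record for algorithmic task assignment.
# Field names and the helper below are illustrative assumptions, not a standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AssignmentDecision:
    worker_id: str        # pseudonymous ID, not personal data
    task_id: str
    model_version: str    # which scoring model produced the decision
    inputs_used: list[str]  # features the model saw (supports explainability)
    score: float          # ranking score behind the assignment
    outcome: str          # "assigned", "declined", "deprioritized"
    appeal_channel: str   # where the worker can contest the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(decision: AssignmentDecision, path: str = "audit_log.jsonl") -> None:
    """Append the decision to a write-once log that auditors can replay."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

if __name__ == "__main__":
    log_decision(AssignmentDecision(
        worker_id="w-4821", task_id="t-0097", model_version="ranker-2025.03",
        inputs_used=["rating_90d", "timezone_overlap", "completion_rate"],
        score=0.87, outcome="assigned", appeal_channel="appeals@platform.example",
    ))
```

A record like this gives an independent auditor something to replay and gives the worker a concrete reference when filing an appeal.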
Third, invest in connectivity. An estimated 3 billion people still lack internet access. A rigorous estimate from 2023 places the global broadband investment needed for near-universal access at about $418 billion, mostly in emerging markets. This investment is essential infrastructure for borderless AI labor. Prioritize open-access fiber, shared 4G/5G in rural areas, and community networks. Public funds should promote open competition and fair-use rules so smaller firms and schools can benefit.
Fourth, speed up cross-border payments and credentials. Workers need quick and low-cost methods to get paid, demonstrate their skills, and maintain verified work histories. Trade organizations are already tracking a surge in digital services. The next step is mutual recognition of micro-credentials and compatible identity standards. Reliable certificates and verifiable portfolios allow employers to assess workers based on evidence, not stereotypes—an essential principle in borderless AI labor.
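As a simple illustration of what "verifiable" could mean in practice, the sketch below issues a stackable micro-credential with an integrity hash and checks it for tampering. The schema, issuer name, and skill labels are hypothetical; a production system would rely on digitally signed credentials (for example, the W3C Verifiable Credentials model) rather than a bare hash.

```python
# A minimal sketch of issuing and verifying a portable micro-credential.
# The schema and issuer are assumptions; production systems would use
# digital signatures (e.g., W3C Verifiable Credentials), not a bare hash.
import hashlib
import json

def issue_credential(holder: str, skill: str, issuer: str, level: str) -> dict:
    record = {"holder": holder, "skill": skill, "issuer": issuer, "level": level}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {**record, "integrity": digest}

def verify_credential(credential: dict) -> bool:
    claimed = credential.get("integrity", "")
    record = {k: v for k, v in credential.items() if k != "integrity"}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return digest == claimed

if __name__ == "__main__":
    cred = issue_credential("worker-accra-001", "data-annotation-qa",
                            "Ghana TVET Institute (hypothetical)", "level-2")
    print(verify_credential(cred))   # True
    cred["level"] = "level-5"        # tampering breaks verification
    print(verify_credential(cred))   # False
```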
Preparing schools for borderless AI labor
Education systems must shift from a geography-focused model to a task-focused model. Start with language-aware and AI-aware curricula. Students should learn how to use model-assisted tools for translation, summarization, coding, and data cleaning. They also need to judge outputs, cite sources, and respect privacy. Employer surveys indicate substantial investment in AI skills and ongoing job redesign. Schools should reflect this reality with projects that involve actual work for external clients and peers in different time zones, building the trust that borderless AI labor values.
Figure 2: Freelancing is mainstream: 64M Americans—38% of the workforce—freelanced in 2023.
Next, teach platform literacy and portfolio building. Many graduates will have careers that combine organizational roles with platform work. They must understand service-level agreements, dispute processes, rating systems, and the basics of tax compliance across borders. Capstone projects should culminate in public portfolios featuring versioned code, reproducible analyses, and evidence of collaboration across cultures and time zones. Credentialing should be modular and stackable, so a learner in Accra can combine a local certificate with a global micro-credential recognized by the same employer.
Finally, prioritize well-being and ethics. The data work behind AI can be demanding. Students who choose annotation, moderation, or risk analysis need training to handle exposure to harmful content and find pathways to safer, higher-value roles. Programs should normalize mental health support, rotate students out of traumatic tasks, and teach how to create safer processes—such as automated filtering, triage, and escalation—to reduce harm. The goal is not to deter students but to empower them and provide safeguards in a market that will continue to grow.
The world has already crossed a significant threshold. When digitally deliverable services surpass $4.5 trillion and developing economies exceed $1 trillion in exports, we aren’t discussing a future trend; we’re managing a current reality. Borderless AI labor is not just a concept; it involves a daily flow of tasks, talent, and trust—organized by algorithms, traded in global markets, and facilitated over networks. We can let this system develop by default, with weak protections, inadequate training, and increasing disparities. Or we can take action: establish portable standards for digital work, regulate algorithmic management, invest in connectivity, and teach for a task-driven environment. If we choose the latter, the potential is genuine. Millions who have been invisible to global employers can gain recognition, verifiable reputations, and access to safer, better-paying jobs. This is a policy decision—one that schools, ministries, and buyers can act on now, while the new market is still taking shape.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Amnesty International. (2025, April 3). Kenya: Meta can be sued in Kenya for role in Ethiopia conflict.
Brookings Institution. (2025, October 7). Reimagining the future of data and AI labor in the Global South.
Center for Democracy & Technology (CDT). (2024, January 30). Investigating content moderation systems in the Global South.
Deel. (2025, April 13). Global hiring trends show cross-border growth and 82% remote roles.
MSCI Real Assets. (2025, July 15). The office-market recovery is here, just not everywhere.
Moody's CRE. (2025, January 3). Q4 2024 preliminary trend announcement: Office vacancy hits 20.4%.
OECD. (2024, March). Using AI in the workplace.
OECD. (2025, February 6). Algorithmic management in the workplace.
Oughton, E., Amaglobeli, D., & Moszoro, M. (2023). What would it cost to connect the unconnected? arXiv.
United Nations Conference on Trade and Development (UNCTAD) via WAM. (2024, December 7). Developing economies surpass $1 trillion in digitally deliverable services; global total $4.5 trillion.
Upwork. (2023, December 12). Freelance Forward 2023.
World Economic Forum. (2025, January 7). Future of Jobs Report 2025 (press release and digest).
World Economic Forum. (2025, May 13). How AI is reshaping the future of informal work in the Global South.
World Trade Organization. (2025). Digitally Delivered Services Trade Dataset.
Stop Training Coders, Build Scientists: An ASEAN AI Talent Strategy
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
Bootcamps produce tool users, not frontier researchers
ASEAN needs a scientist-first AI talent strategy
Fund PhDs, labs, and compute to invent, not import
The statistic that should jolt us awake is simple: almost 40% of jobs worldwide will be affected by AI, either displaced or reshaped, according to the IMF. This scale of change is historic. It is not merely a coding-bootcamp issue; it is a problem of research capacity. If ASEAN opts for quick courses intended to help generalist software engineers use AI tools, the region risks repeating a mistake others have learned the hard way: preparing people to operate kits while leaving the science to others. The ASEAN AI talent strategy must start with a straightforward premise. Low-cost agents and copilots already handle much of what entry-level coders do, and their subscriptions cost less than a team lunch. If Southeast Asia wants to compete, it needs a pipeline of AI scientists capable of building models, designing algorithms, and publishing groundbreaking work that attracts investment and fosters clusters. This is the only approach that will scale.
Why an ASEAN AI talent strategy must reject “teach-everyone-to-code”
There is a common policy instinct: subsidize short courses that enable existing developers to use AI libraries from the cloud. This instinct is understandable. It is quick, appealing, and easy to measure. However, the market has evolved rapidly. Enterprise surveys in 2024–2025 indicate that the majority of firms are now incorporating generative AI into their workflows. Several studies report double-digit productivity gains for tasks like coding, writing, and support. Meanwhile, mainstream tools have lowered the cost of entry-level coding. GitHub Copilot plans run about US$10–$19 per user per month, and OpenAI's small models charge cents per million tokens. When the marginal cost of junior tasks approaches zero, bootcamps that teach tool operation become a treadmill. They prepare people for jobs that machines can already perform. The ASEAN AI talent strategy must, therefore, focus on deep capability rather than superficial familiarity.
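A rough back-of-the-envelope comparison shows why tool-operation training competes directly with the tools themselves. The per-seat prices below reflect the range cited above; the bootcamp tuition figure is an illustrative assumption, not a quoted price.

```python
# Rough cost comparison: six months of coding-assistant seats vs. one bootcamp.
# Tool prices reflect the US$10-19/user/month range cited above; the
# bootcamp tuition (US$10,000) is an illustrative assumption, not a quote.
copilot_monthly_low, copilot_monthly_high = 10, 19
months = 6
bootcamp_tuition = 10_000  # hypothetical tuition for one trainee

tool_cost_low = copilot_monthly_low * months
tool_cost_high = copilot_monthly_high * months
print(f"Six months of an AI coding assistant: ${tool_cost_low}-${tool_cost_high}")
print(f"One bootcamp seat (assumed): ${bootcamp_tuition:,}")
print(f"Ratio: roughly {bootcamp_tuition // tool_cost_high}x to "
      f"{bootcamp_tuition // tool_cost_low}x more per person")
```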
ASEAN's higher-education and research base highlights the risk of staying shallow. Several member states still report very low researcher densities by global standards. A recent review notes 81 researchers per million people in the Philippines, 205 in Indonesia, and 115 in Vietnam—figures far below those of advanced economies. Malaysia allocates about 0.95% of GDP to R&D, roughly five times Vietnam's share. Yet even in Malaysia, the challenge is creating strong links between universities and industry. If Southeast Asia spends limited resources on short training courses without enhancing research capacity, it will produce more users of foreign tools rather than scientists who develop ASEAN tools. That is a trap.
Figure 1: Six months of top AI coding assistants cost under US$250—orders of magnitude below a single bootcamp. Upskilling for tool operation can’t beat automation on price.
What South Korea’s bootcamps got wrong
South Korea serves as a warning. For years, policy emphasized rapid scaling of digital training through national platforms and intensive programs aimed at pushing more individuals into “AI-ready” roles. This approach improved literacy and awareness but did not create a surge of frontier researchers. Meanwhile, the political landscape changed. In December 2024, the National Assembly impeached the sitting president; in April 2025, the Constitutional Court removed him from office. The direction of tech policy shifted again. When strategy changes and budgets are adjusted, the only lasting asset is foundational capacity: graduate programs, labs, and a trained scientific base that can withstand disruptions. South Korea’s experience should caution ASEAN against equating bootcamp completion with scientific competitiveness.
There is also a harsh market reality. Employers are already replacing entry-level tasks with AI. A business leader survey conducted by the British Standards Institution in 2024–2025 reveals that firms are embracing automation, even as jobs change or vanish. A growing number of studies show that copilots speed up routine coding, and some longitudinal data indicate a decline in junior placement rates at bootcamps compared with the pre-generative-AI era. None of this implies "don't train." It emphasizes the need to train for the right frontier. Teaching thousands to integrate existing APIs into apps will not provide a regional advantage when copilots perform the same tasks in seconds. Teaching a smaller number to design new learning architectures, optimize inference under tight energy budgets, or build multilingual evaluation suites is harder, but that is what investors value.
China’s lesson: build scientists, not generalists
China's current trajectory illustrates what sustained investment in talent and computing can achieve. In 2025, top universities increased enrollment in strategic fields such as AI, integrated circuits, and biomedicine to meet national priorities. Cities implemented compute-voucher programs that subsidized access to training clusters for smaller firms, transforming idle capacity into research output. China's AI research workforce has grown significantly over the past decade, while its universities and labs continue to attract leading researchers. The United States still dominates the origin of "frontier" foundation models, according to the 2024 AI Index. Yet the global competition now revolves around who can build and retain scientific teams and who can obtain computing power at reasonable costs. The message for ASEAN is clear: to generate spillover benefits in its cities, invest in scientists and labs, not merely in users of tools. The research core is what anchors ecosystems and can lead to significant economic growth.
Figure 2: Frontier models are concentrated in a few scientific hubs. ASEAN won't close this gap with bootcamps; it needs labs, PhDs, and compute.
The talent strategy behind that research core is crucial. Elite programs attract top undergraduates into demanding, math-heavy tracks; they offer PhDs with generous stipends; they establish joint industry chairs allowing principal investigators to move between labs and startups; and they form groups that publish in competitive venues. The region doesn’t need to replicate China’s political economy to emulate its pipeline design. ASEAN can achieve this within a democratic framework by pooling resources and setting standards. The goal should be clear: train a few thousand AI scientists—people who can publish, review, and lead—rather than hundreds of thousands of tool operators. This is not elitist; it is practical in an age where entry-level work is being automated, and where rewards go to original research.
A regional blueprint for an ASEAN AI talent strategy
First, fund depth, not breadth. Create an ASEAN Doctoral Network in AI that offers joint PhDs across leading universities in Singapore, Malaysia, Thailand, Vietnam, Indonesia, and the Philippines. Admit small cohorts based on merit, provide regional stipends tied to local living expenses, and guarantee computing time through a shared cluster. The backbone can be a federated compute alliance, hosted at research universities and connected through high-speed academic networks. Cities hosting nodes should ensure clean power; states should offer expedited visas for fellows and their families. Policymakers' role is crucial here: success should be measured not by enrollment but by published papers, released code, and established labs.
Second, improve the research base. Most ASEAN members must increase their R&D spending and research intensity to move beyond applied integration. The gap is apparent. Malaysia allocates just under 1% of GDP for R&D, while some peers spend even less; several countries report researcher densities too low to support robust lab cultures. Establish national five-year minimums for public AI research funding. Tie grants to multi-institutional teams, ensuring at least one public university is involved outside the capital city. Encourage repatriation by offering packages for “returning scientists” that cover relocation, lab startup, and hiring of early-stage talent. A stronger research base will also lessen the need to import expertise at high costs.
Third, align policy with industry needs, but guard against dilution. Malaysia’s national AI office and Indonesia’s AI roadmap indicate intent to coordinate. Use these organizations to redirect funding toward fundamental research. Designate at least a quarter of public AI funding as contestable only with a university principal investigator on the grant. Require that every government-funded model provide a replicable evaluation card, complete with multilingual benchmarks that reflect Southeast Asia’s languages. This is how the region establishes credibility in venues where scientific reputations are built.
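To illustrate what a "replicable evaluation card" might contain, here is a minimal sketch expressed as a plain data structure. The field names, benchmark names, and language list are hypothetical placeholders rather than an established ASEAN standard.

```python
# A minimal sketch of a replicable evaluation card for a publicly funded model.
# Field and benchmark names are hypothetical placeholders, not a standard.
import json

evaluation_card = {
    "model_name": "sea-lm-demo",               # hypothetical model identifier
    "funding_program": "ASEAN-AI-grant-2026",  # hypothetical grant reference
    "training_data_summary": "Public web text plus licensed regional corpora",
    "evaluation": {
        # Multilingual benchmarks covering Southeast Asian languages (illustrative).
        "languages": ["id", "ms", "th", "vi", "fil", "my", "km", "lo"],
        "benchmarks": [
            {"name": "regional-reading-comprehension", "metric": "accuracy"},
            {"name": "regional-translation", "metric": "chrF"},
        ],
        "code_and_seeds_released": True,       # needed for replication
    },
    "known_limitations": ["Low coverage of minority languages", "No speech data"],
}

print(json.dumps(evaluation_card, indent=2))
```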
Fourth, support the early-career ladder even as copilots become more prevalent. The HBR warning is valid: if companies eliminate junior roles, they may jeopardize their future teams. Governments can encourage better practices without micromanaging hiring by linking R&D tax credits to paid research residencies for new graduates in approved labs. Provide matching funds for companies sponsoring PhD industrial fellowships. Promote open-source contributions from publicly funded work and establish national code registries that enable students to create portable portfolios. These small design choices can significantly impact career development.
Finally, acknowledge where generalist upskilling still fits. Digital literacy and short courses will remain crucial for the broader workforce, providing a buffer against disruption. However, they are not the foundation of competitive advantage in AI. The latest World Bank analysis for East Asia and the Pacific states that new technologies have, so far, supported employment. Yet, it cautions that reforms are needed to sustain job creation and limit inequality. ASEAN should benefit from gains while investing in the scarce asset: scientific talent capable of setting the frontier for regional firms. In a market with broad adoption, the advantage belongs to those who can invent.
We began with a stark statistic: 40% of jobs will be influenced by AI. The region can choose to react with more short courses or respond with a well-thought-out plan. The better option is a scientist-first ASEAN AI talent strategy that funds rigor, builds labs, secures computing power, and creates opportunities for researchers who can publish and innovate. Political landscapes will shift. Costs will drop. Tools will improve. What endures is capacity. If that capacity is rooted in ASEAN’s universities and companies, value will emerge. If it exists elsewhere, ASEAN will forever rely on renting it. The policy path is concrete. It requires leaders to choose a frontier and support it with funding, visas, computing power, and patience. Act now, and within five years, the region will produce its own research, not just consume press releases. Its firms will also hire the people behind the work. In a world filled with inexpensive agents, the only costly asset left is original thinking. Foster that.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Al Jazeera. (2024, December 14). South Korea National Assembly votes to impeach President Yoon Suk-yeol.
AP News. (2025, April 3–5). Yoon Suk Yeol removed as South Korea's president over short-lived martial law.
British Standards Institution (BSI). (2024, September 18). Embrace AI tools even if some jobs change or are lost.
GitHub. (2024–2025). Plans and pricing for GitHub Copilot.
IMF (Georgieva, K.). (2024, January 14). AI will transform the global economy. Let's make sure it benefits humanity.
ILO. (2024). Asia-Pacific Employment and Social Outlook 2024.
McKinsey & Company. (2024, May 30). The state of AI in early 2024: GenAI adoption, use cases, and value.
OECD. (2025, June 23). Emerging divides in the transition to artificial intelligence.
OECD. (2025, July 8). Unlocking productivity with generative AI: Evidence from experimental studies.
Our World in Data (UIS via World Bank). (2025). Number of R&D researchers per million people (1996–2023).
Peng, S., et al. (2023). The impact of AI on developer productivity. arXiv:2302.06590.
Reuters. (2025, March 10). China's top universities expand enrolment to beef up capabilities in AI.
Reuters. (2025, July 22). Indonesia targets foreign investment with new AI roadmap, official says.
Stanford HAI. (2024). The 2024 AI Index Report.
Tom's Hardware. (2025, September). China subsidizes AI computing with "compute vouchers."
UNESCO. (2024, February 19). ASEAN stepping up its green and digital transition.
World Bank. (2025, June 17/July 1). Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific; press release on technology and jobs in EAP.
World Bank/ASEAN (SEAMEO-RIHED). (2022). The State of Higher Education in Southeast Asia.
From Smartphone Bans to AI Policy in Schools: A Playbook for Safer, Smarter Classrooms
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Smartphone bans offer a blueprint for AI policy in schools
Use age-tiered access, strict privacy, and teacher oversight
Evaluate results publicly to protect attention, equity, and integrity
Let's begin with a significant figure: forty. By the close of 2024, a striking 79 education systems, representing approximately 40% of the global education landscape, had enacted laws or policies curbing the use of smartphones in schools. This global trend shows how societies experiment with a technology, observe its impact, and then set firm rules for children. The same cycle now needs to play out, much faster, for large language models. The parallels are undeniable. Both technologies are ubiquitous, designed to captivate users, and able to divert students from learning at a pace that schools struggle to match. The question is not whether we need an AI policy in schools, but whether we can build it with the speed and clarity the smartphone experience has taught us to expect.
What smartphone bans teach us about AI policy in schools
The success of smartphone bans over the past two years serves as an instructive case study for policy. UNESCO tracked a swift shift from scattered rules to national prohibitions or restrictions. By the end of 2024, seventy-nine systems had implemented these policies, up from sixty the previous year. In England, national guidance released in February 2024 supports headteachers who ban the use of phones during the school day. The Netherlands implemented a classroom ban with limited exceptions for learning and accessibility, achieving high compliance quickly. Finland took action in 2025 to limit device use during school hours and empowered teachers to confiscate devices that disrupted the learning environment. These actions were not just symbolic; they established a straightforward norm: unless a device clearly supports learning or health, it stays put away.
Figure 1: Smartphone rules scaled fast across systems. AI policy in schools can move just as quickly when guidance is simple and enforceable.
The evidence on distraction backs this conclusion. In PISA 2022, about two-thirds of students across the OECD reported being distracted by their own devices or by others' devices during math lessons, and those who reported distractions scored lower. In U.S. classrooms, 72% of high school teachers view phone distraction as a significant issue, and 97% of 11- to 17-year-olds with smartphones admit to using them during school hours. One year after Australia's nationwide restrictions, a New South Wales survey of nearly 1,000 principals found that 87% noticed a decrease in distractions and 81% reported improved learning outcomes. None of these figures proves causation on its own. Still, taken together, they point to a clear signal: less phone access leads to calmer classrooms and more focused learning.
There is also a warning. Some systems chose a different approach. Estonia has invested in managed device use and AI literacy rather than imposing bans, believing the goal is to improve teaching practices, not just to restrict tools. UNESCO warns that the evidence base still lags behind product cycles; solid causal studies are rare, and technology often evolves faster than evaluations can keep up. Good policy learns from this. It should set clear, simple boundaries while leaving space for controlled, educational use. That balance is a model for AI policy in schools, which must also be transparent so that all stakeholders stay informed and confident.
Designing AI policy in schools: age, access, and accountability
The second lesson is about design. The smartphone rules that worked were straightforward but not absolute. They allowed exceptions for teaching and for students with health or accessibility needs. AI policy in schools should follow this pattern, with a stronger focus on privacy. The risks associated with AI go beyond distraction. They also include data collection, artificial relationships, and mistakes that can impact grades and records. Adoption of AI is already widespread. In 2024, 70% of U.S. teens reported using generative AI, with approximately 40% using it for schoolwork. Teacher opinions are mixed: in a 2024 Pew survey, a quarter of U.S. K-12 teachers stated that AI tools do more harm than good, while about a third viewed the benefits and drawbacks as evenly balanced. These findings suggest the need for clear guidelines, not blanket bans.
A practical blueprint is straightforward. For students under 13, the use of school-managed tools that do not retain chat histories and block conversational 'companions' should be the norm. From ages 13 to 15, use should be permitted only through school accounts with audit logs, age verification, and content filters; open consumer bots should not be allowed on school networks. From the age of 16 onward, students can use approved tools for specific tasks, subject to teacher oversight and clear attribution rules. Assessment should align with this approach. In-class writing and essential tasks should be 'AI-off' by default unless the teacher specifies a limited use case and documents it. Homework can include 'AI-on' tasks, but these must be cited with prompts and outputs. The aim is not to trap students, but to maintain high integrity, steady attention, and visible learning.
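A minimal sketch of how a district might encode these tiers as a checkable policy is shown below; the tier boundaries follow the blueprint above, while the function name and returned fields are illustrative assumptions.

```python
# A minimal sketch encoding the age-tiered AI access blueprint described above.
# Tier boundaries follow the text; function and field names are illustrative.
def ai_access_policy(age: int) -> dict:
    if age < 13:
        return {
            "tools": "school-managed only",
            "chat_history_retained": False,
            "companion_modes_blocked": True,
            "open_consumer_bots": False,
        }
    if age <= 15:
        return {
            "tools": "school accounts with audit logs and content filters",
            "age_verification_required": True,
            "open_consumer_bots": False,  # blocked on school networks
        }
    return {
        "tools": "approved tools for specified tasks",
        "teacher_oversight_required": True,
        "attribution_rules_apply": True,
    }

if __name__ == "__main__":
    for age in (11, 14, 17):
        print(age, ai_access_policy(age))
```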
Procurement makes the plan enforceable. Contracts should require vendors to turn off data retention by default for minors, conduct age checks that do not collect sensitive personal information, provide administrative controls that block "companion" chat modes and jailbreak plug-ins, and share basic model cards that explain data practices and safety testing procedures. Districts should prefer tools that generate teacher-readable logs across classes. These expectations align with UNESCO's global guidance for human-centered AI in education and with evolving national guidance that emphasizes the importance of teacher judgment over automation.
Evidence we have—and what we still need—on AI policy in schools
We should be honest about the evidence. For phones, the connection between distraction and lower performance is strong across systems; however, the link between bans and test scores remains under investigation. Some notable improvements, such as those in New South Wales, come from principal surveys rather than randomized trials. With AI, the knowledge gap is wider. Early data indicate increased usage by teachers and students, along with a rapid expansion of training. In fall 2024, the percentage of U.S. districts reporting teacher training on AI rose to 48%, up from 23% the previous year. At the same time, teen use is spreading beyond homework. In 2025 research, 72% of teens reported using AI "companions," and over half used them at least a few times a month. This trend introduces a new risk for schools: AI tools that mimic friendships or therapy. AI policy in schools should draw a clear line here, and ongoing evaluation is crucial for its success.
Figure 2: Capacity is catching up but not there yet—teacher AI training doubled in a year, still below half of districts.
Method notes are essential. PISA 2022 surveyed roughly 690,000 15-year-olds across 81 systems; the 59–65% distraction figures represent OECD averages, not universal classroom rates. The Common Sense figures are U.S. surveys with national samples collected in 2024, with follow-ups in 2025 on AI trust and companion use. RAND statistics come from weighted panels of U.S. districts. The U.K. AI documents provide policy guidance, not evaluations, and the Dutch and Finnish measures are national rules that are just a year or so into implementation. This evidence should be interpreted carefully; it is valuable and credible, but still in the process of development.
This leads to a practical rule: every district AI deployment should include an evaluation plan. Set clear outcomes in advance. Monitor workload relief for teachers, changes in plagiarism referrals, and progress for low-income students or those with disabilities. Share the results on a public schedule, using clear and concise language. Smartphones taught us that policies work best when the public sees consistent, local evidence of their benefits. AI policy in schools will gain trust in the same way. If a tool saves teachers an hour a week without increasing integrity incidents, keep it. If it distracts students or floods classrooms with off-task prompts, deactivate it and provide an explanation for why.
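The keep-or-deactivate rule at the end of this paragraph can be written down precisely, which helps districts pre-register it before deployment. The thresholds and field names below are illustrative assumptions, not established district policy.

```python
# A minimal sketch of the keep/deactivate decision rule described above.
# Thresholds and field names are illustrative assumptions, not district policy.
def review_tool(hours_saved_per_teacher_week: float,
                integrity_incidents_before: int,
                integrity_incidents_after: int) -> str:
    saves_time = hours_saved_per_teacher_week >= 1.0
    no_new_incidents = integrity_incidents_after <= integrity_incidents_before
    if saves_time and no_new_incidents:
        return "keep: publish results on the public schedule"
    return "deactivate: publish the reason and the supporting data"

if __name__ == "__main__":
    print(review_tool(1.4, 12, 11))   # keep
    print(review_tool(0.3, 12, 19))   # deactivate
```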
A principled path forward for AI policy in schools
What does a principle-based AI policy in schools look like in practice? Start with transparency. Every school should publish a brief AI use statement for staff, students, and parents. This statement should list approved tools, clarify what data they collect and keep, and identify who is responsible. Move to accountability. Schools should maintain audit logs for AI accounts and conduct regular spot checks to ensure the integrity of assessments. Include human oversight in the design. Teachers should decide when AI is used and remain accountable for feedback and grading. Promote equity by design. Provide alternatives for students with limited access at home. Ensure tools are compatible with assistive technology. Teach AI literacy as part of media literacy, so that students can critically evaluate AI-generated outputs, rather than simply consuming them.
The policy should extend beyond the classroom. Procurement should establish privacy and safety standards in line with child-rights laws. England's official guidance clarifies what teachers can do and where professional judgment is necessary. UNESCO promotes human-centered AI, emphasizing the importance of strong data protection, capacity building, and transparent governance. Schools should choose tools that turn off data retention for minors, offer meaningful age verification, and enable administrators to block romantic or therapeutic "companion" modes, which many teens now report trying. This last restriction should be non-negotiable for primary and lower-secondary settings. The lesson from phones is clear: if a feature is designed to capture attention or influence emotions, it should not be present during the school day unless a teacher explicitly approves it for a limited purpose.
The number that opened this piece—40% of systems restricting phones—reflects a social choice. We experimented with a tool in schools and found that, without a clear set of rules, it disrupted the day. That lesson is relevant now. Generative AI is spreading faster than smartphones did and will infiltrate every subject and routine. A confusing message will produce inconsistent rules and new inequalities. A simple, firm AI policy in schools can counteract this. It can safeguard attention, minimize integrity risks, and still enable students to learn how to use AI effectively. The path is clear: establish strict rules for age and access, ensure strong logs, prioritize transparent procurement, and build in evaluation. If policymakers adopt this plan now, the next statistic we discuss won't be about bans. It will be about how many more hours of genuine learning we gain—quietly, week after week, in classrooms that feel focused again.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Australian Government Department of Education – Ministers. (2025, February 15). School behaviour improving after mobile phone ban and vaping reforms.
Common Sense Media. (2024, September 17). The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School.
Common Sense Media. (2025, February 6). Common Sense makes the case for phone-free classrooms.
Department for Education (England). (2024, February 19). Mobile phones in schools: Guidance.
Department for Education (England). (2024, August 28). Generative AI in education: User research and technical report.
Department for Education (England). (2025, August 12). Generative artificial intelligence (AI) in education.
Eurydice. (2025, June 26). Netherlands: A ban on mobile phones in the classroom.
The Guardian. (2025, April 30). Finland restricts use of mobile phones during school day.
The Guardian. (2025, May 26). Estonia eschews phone bans in schools and takes leap into AI.
OECD. (2023). PISA 2022 Results (Volume I): The State of Learning and Equity in Education.
OECD. (2024). Students, digital devices and success.
OECD. (2025). Managing screen time: How to protect and equip students to navigate digital environments?
RAND Corporation. (2025, April 8). More Districts Are Training Teachers on Artificial Intelligence.
UNESCO. (2023/2025). Guidance for generative AI in education and research.
UNESCO Global Education Monitoring Report team. (2023; updated 2025, January 24). To ban or not to ban? Smartphones in school only when they clearly support learning.