AI Agents in Education Can Double Learning, So Let’s Design for Homes, Not Just Enterprises
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI agents in education boost learning while cutting time
Build home-first workflows for practice, planning, and records
Scale with evidence and guardrails to protect equity and trust
One data point should change our thinking: a recent study shows that students using an AI tutor learned significantly more, and in less time, than those in an active learning class covering the same material. The AI tutor was built around evidence-based teaching methods, and the result is not just a marketing claim; it is supported by peer-reviewed research. If an AI tutor can save time and improve learning in controlled settings, what happens when these “agentic” systems are used in the family home, where most homework takes place? The effects are real. They point to the potential for families to adopt AI agents in education that assist children each night, coach parents, and allow schools to focus their limited human expertise where it is most needed. In short, AI agents in education are transitioning from novelty to necessity, and the question is how quickly we can adjust to that reality.
We can see the shift in usage data. In 2025, 34% of U.S. adults reported using ChatGPT, roughly double the 2023 figure. Among UK undergraduates, use of generative tools increased from 66% in 2024 to 92% in 2025. Teen adoption is also growing: 26% of U.S. teens reported using ChatGPT for schoolwork in 2024, up from 13% the previous year. None of this guarantees learning improvements, but it shows where attention and effort are being directed. The opportunity now is to turn basic tool use into accountable, agent-led workflows that prioritize families first, schools second, and vendors last. This means creating a dependable “home system” for tutoring, planning, and documentation that works just as well on a Tuesday night as it does on a weekend for test prep.
AI agents in education: from single tools to household systems
An AI agent is more than just a chat window. It is software that can plan tasks, use tools, and work towards a goal. Businesses are already using agents to answer complex questions, manage workflows, and connect different systems. The lesson for schools is clear: what works in business will also work at home. Families need agents that help children study, automatically log progress for teachers, draft emails about accommodations, and schedule therapy sessions without paperwork getting lost. The key points from consulting apply at home: it’s not just about how intelligent the agent is; it’s about redesigning the workflow around what learners and caregivers actually do. Focus on the weekly rhythm of assignments and supports rather than worrying about flashy demonstrations.
A clear example of where this is heading comes from a recent story about a parent who used “vibe coding” tools to build an AI tutoring platform for her dyslexic son. She did not wait for a perfect product. She combined research-based prompts, dashboards, and student intake forms to personalize the agent’s guidance, drawing on hundreds of studies about learning differences. The result was not a toy; it was a home setup that adjusted to the child’s motivation and needs. When a parent can create a functioning tutor, we have entered a new era. For context, specialized reading support in the U.S. averages about $108 per hour. An AI agent that can enhance or partially replace some of those human sessions changes the dynamics of time, cost, and access—all while keeping the human specialist for the more complex parts.
What can AI agents do for families now?
We should identify specific use cases because they relate to real challenges families face. Start with support for reading and writing. Recent evidence shows that AI tutors can deliver greater gains in less time when designed around effective teaching methods. This makes agents ideal for structured, paced, and responsive nightly practice. Add school logistics: an agent can extract deadlines from learning platforms, generate study plans, and remind both children and caregivers. It can summarize teacher feedback into two specific actions per subject. It can maintain a private, ongoing learning record that parents can share at the next meeting. Because these are agents, not static programs, they can access external tools to retrieve school calendars, assemble forms for accommodations, or draft that email you've been putting off.
Figure 1: Students using the AI tutor spent 49 minutes on task versus 60 minutes in class—about 18% less time—while the same study reports significantly larger learning gains for the AI-tutored group.
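The time saving reported in Figure 1 follows from simple arithmetic; a minimal sketch, using only the minute figures stated in the caption:

```python
# Time-on-task figures as reported in Figure 1.
ai_tutor_minutes = 49
in_class_minutes = 60

# Relative time saved by the AI-tutored group.
savings = (in_class_minutes - ai_tutor_minutes) / in_class_minutes
print(f"Time saved: {savings:.0%}")  # → Time saved: 18%
```

The roughly 18% reduction matters only because the same study also reports larger learning gains; less time with worse outcomes would be no achievement.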
Families also need help connecting limited, costly expert support with critical moments. Reading intervention tutors are essential, but capacity and cost are issues. With agents enabling focused practice between sessions, children can arrive ready for human instruction. This does not mean total replacement; it means better use of resources. Additionally, broader tool usage suggests families are becoming comfortable asking AI for help with information and planning. Surveys show that many adults rely on AI to search and summarize, and students report routine use. If we can direct that comfort toward a household agent focused on learning goals, we can reduce back-and-forth communication, minimize wasted time, and make the hours spent with a human expert more effective.
Risks, equity, and the danger of “agent washing”
There is a strong temptation to label every scripted workflow an “agent.” Analysts caution that many agent projects may fail within 2 years because teams pursue trends rather than results. This warning is essential in education, where trust is the most valuable asset. The safeguard against soaring expectations is precise planning and measurable outcomes: time on task, growth on validated assessments, and teacher-observed application. This also means avoiding unsafe autonomy. Household agents should have limited default functions, operate under strict guidelines, and provide logs that parents and teachers can quickly review. The standard must focus on verifiable improvements, not flashy showcases.
Figure 2: Meeting universal schooling by 2030 requires ~44 million teachers; Sub-Saharan Africa (15,049k) and South-Eastern Asia (9,766k) account for over half of the gap—underscoring why AI agents should target routine workload, not replace scarce experts.
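The “over half” claim in Figure 2 can be checked directly from the regional figures given in the caption (values in thousands of teachers; the 44 million total is approximate):

```python
# Teacher-gap figures from Figure 2, in thousands.
total_gap_k = 44_000           # ~44 million teachers needed by 2030
sub_saharan_africa_k = 15_049
south_eastern_asia_k = 9_766

# Combined share of the global gap held by the two regions.
share = (sub_saharan_africa_k + south_eastern_asia_k) / total_gap_k
print(f"Combined share of the gap: {share:.1%}")  # → 56.4%
```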
We also need strong protections for equity. Global policy organizations warn that AI's potential will vanish if we ignore access, bias, and data handling. This is not just hand-wringing; it is about practical design. AI agents in education must prioritize human-centered guidance, protect student data, and be implemented with teacher training and clear rules. We should account for low-resource settings—for instance, offline options, simple devices, and multilingual prompts. Remember the staffing challenge: the world needs millions more teachers this decade to achieve universal education goals. Agent systems should help by taking on routine tasks and structured practice, not by replacing irreplaceable specialized roles.
A playbook to make AI agents in education work—at home and at school
Start with small, manageable successes. Pick one subject and one grade level. Use an agent to assign tasks, coach, and check practice aligned with the existing curriculum. Treat the agent as a redesign of the workflow, not an add-on tool. The most effective implementations in industry focus on the process rather than the tool itself; schools should do the same. As usage increases, integrate the agent with gradebooks and learning platforms to automatically populate progress logs and reduce administrative burdens. Monitor fidelity: if the agent’s prompts stray from the curriculum, correct them. The goal is to free up teacher time for valuable feedback and meetings while providing families with a reliable nightly routine.
Then scale based on evidence. Use validated measurements and compare agent-supported practice against standard methods. If the results favor the agent, make it official. If they do not, adapt or stop. Build trust within the community by showing not only that students used an AI tool but that they learned more in less time. That is the standard set by recent research on AI tutoring.
Meanwhile, monitor usage trends. Both adult and student use indicates that agents are becoming part of daily life; institutions should meet families where they are by offering approved agent options, clear data policies, and easy-to-follow guides. Finally, align purchasing strategies to avoid “agent washing” by using straightforward criteria for limits on autonomy, logging, and outcomes. This reduces vendor turnover and keeps the focus on learning rather than features.
An AI tutor can lead to much more learning in less time than an active-learning class covering the same content. This single insight serves as a guiding principle for policy. It suggests that the right kind of AI—integrated into a workflow, limited for safety, and measured by outcomes—can help families gain hours back and help schools focus on their valuable human expertise. The consumer trend is moving in the same direction, as adults and students incorporate AI into their everyday tasks. The task ahead is to direct that momentum into accountable agents designed for home use and connected to schools. If accomplished, “AI agents in education” will evolve from a buzzword to a dependable part of every household’s learning resources. The key question is simple: do students learn more, faster, without compromising trust or equity? If the answer is yes, we should scale and start now.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Axios. (2025, September 18). AI can support 80% of corporate affairs tasks, new BCG report says.
British Dyslexia Association. (2025). Dyslexia overview.
HEPI & Kortext. (2025). Student Generative AI Survey 2025.
McKinsey & Company. (2025, March 12). The State of AI: Global survey.
McKinsey & Company. (2025, September 12). One year of agentic AI: Six lessons from the people doing the work.
OECD. (2024). The potential impact of AI on equity and inclusion in education.
Oracle. (2025). 23 real-world AI agent use cases.
Pew Research Center. (2025, January 15). About a quarter of U.S. teens have used ChatGPT for schoolwork.
Pew Research Center. (2025, June 25). 34% of U.S. adults have used ChatGPT.
Reading Guru. (2024). National reading tutoring cost study.
Reuters. (2025, June 25). Over 40% of agentic AI projects will be scrapped by 2027, Gartner says.
Scientific American. (2025). How one mom used vibe coding to build an AI tutor for her dyslexic son.
Stanford/Harvard team. (2025). AI tutoring outperforms active learning. Scientific Reports.
Teacher Task Force/UNESCO. (2024, October 2). Fact Sheet for World Teachers’ Day 2024.
UNESCO. (2023, updated 2025). Guidance for generative AI in education and research.
The AI Cognitive Extensions Gap: Why Korea’s Test-Prep Edge Won’t Survive Generative AI
Catherine Maguire
Korea excels at teen “creative thinking,” but adults lag in adaptive problem solving
Generative AI automates routine tasks, so value shifts to AI cognitive extensions—framing, modeling, and auditing
Reform exams, classroom routines, and admissions to reward those extensions, or the test-prep edge will fade
Fifteen-year-olds in Korea excel in “creative thinking.” In 2022, 46% reached the top levels on the OECD test, well above the OECD average of 27%. However, Korean adults struggle with flexible, real-world problem-solving. On the OECD Survey of Adult Skills, 37% score at or below Level 1 in adaptive problem solving, and only 1% reach the top level. This gap is significant. School-age achievements in "little-c" creativity do not translate into adult skills in defining problems, synthesizing evidence, and designing solutions alongside AI. Generative tools speed up this change. Randomized studies show that AI improves routine writing quality by about 18% and reduces time spent by around 40%. Customer support agents solve 14-15% more queries with the help of AI. The real value lies in the human steps before and after automation—what we term AI cognitive extensions: scoping, modeling, critiquing, and integrating across sources. Systems that depend heavily on algorithmic training will struggle, while those that teach AI cognitive extensions will thrive.
AI Cognitive Extensions: Paving the Way for a New Era in Education
Generative AI now takes on much of the execution work that schools have valued: standard writing, template coding, routine data transformations, and procedural math. In 10 OECD countries, jobs most affected by AI already prioritize management, collaboration, creativity, and communication. Seventy-two percent of high-exposure job postings require management skills, and demand for social and language skills has increased by 7-8 percentage points over the past decade. As tools handle more tasks, employers appreciate the ability to set up the task, break down challenges, evaluate outputs, and communicate decisions across teams. These are AI cognitive extensions. They are not mere soft skills; they represent crucial judgment skills in a world where basic tasks are automated.
Korea has been moving in this direction on paper. The 2022 national curriculum revision, a comprehensive update to the educational framework, aims to cultivate “inclusive and creative individuals with essential skills for the future,” with a focus on uncertainty and student agency. However, policy intent faces cultural challenges. Private after-school tutoring remains nearly universal and is optimized for high-stakes exams. In 2023-2024, participation was around 78-80%, and spending reached record levels—about ₩27-29 trillion—despite a declining student population. The education system still rewards algorithmic speed under exam pressure. This system cannot deliver AI cognitive extensions at scale.
Figure 1: Management and process skills dominate AI-exposed roles—the very AI cognitive extensions schools must teach and assess.
What the Data Says About Korea’s Strengths—and Weak Links
It is tempting to say the system is already succeeding based on high PISA scores in creative thinking. But a closer look reveals the truth. PISA measures “little-c” creativity in familiar contexts for 15-year-olds, assessing the ability to generate and refine ideas within set tasks. It does not evaluate whether graduates can define tasks themselves, combine evidence from different fields, or check an AI’s reasoning under uncertainty. The picture changes for adult skills. Korea’s average scores in literacy, numeracy, and adaptive problem solving on the PIAAC survey fall below OECD averages, with a notably thick lower range in adaptive problem solving. This discrepancy creates a labor-market mismatch between the skills students possess and the skills employers need, one that schools pass on to universities and employers. It also highlights the need for AI cognitive extensions.
Figure 2: Korea’s top-performer rate is ~2× the OECD average—strong “little-c” creativity at age 15, not yet proof of adult problem-framing power.
AI adoption further emphasizes the importance of framing over mere execution. Randomized studies show that generative tools improve output for routine writing and standard customer support, especially for less-experienced workers on templated tasks. Coding experiments with AI pair programmers exhibit similar time savings. Automation reduces the value of algorithmic execution, the very area that cram schools optimize. The emphasis shifts to individuals who can write effective prompts, define evaluation criteria, blend domain knowledge with statistical insight, and justify decisions in uncertain situations. Essentially, AI handles more of the “show your work” aspect, and humans must decide which work to present initially. This is the gap Korea needs to bridge.
Designing an AI Cognitive Extensions Curriculum
The solution lies in the curriculum, not superficial tweaks. Start by teaching students the tasks that AI cannot easily manage: defining problems, selecting evaluation measures, and justifying trade-offs. Make these actions explicit. In math and data science, shift assessments from merely solving problems to discussing “model choice and critique”: why choose a logistic link over a hinge loss; how priors matter; where confounding variables might hide; and how we would detect model drift. In writing, revise assessments from finished essays into decision memos that include evidence maps and counterarguments. In programming, replace bland problem sets with “spec to system” tasks where students must gather requirements, create basic tests, and then use AI to draft and refine while documenting risks. These practices turn AI cognitive extensions into a series of habits that can be assessed for quality and reproducibility.
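To make the “model choice and critique” discussion concrete, here is an illustrative sketch (not from any cited study) of the two loss functions named above, evaluated on a signed classification margin m = y·f(x). A student critique would note that logistic loss still gently penalizes confidently correct predictions, which supports probability estimates, while hinge loss ignores any point beyond margin 1, which yields sparse support vectors:

```python
import math

# Illustrative only: two common classification losses on a margin m = y * f(x).
def logistic_loss(m: float) -> float:
    # Smooth and strictly positive: never fully "satisfied" by a correct answer.
    return math.log(1 + math.exp(-m))

def hinge_loss(m: float) -> float:
    # Piecewise linear: exactly zero once the margin exceeds 1.
    return max(0.0, 1 - m)

for m in (-1.0, 0.0, 1.0, 2.0):
    print(f"margin {m:+.1f}: logistic {logistic_loss(m):.3f}, hinge {hinge_loss(m):.3f}")
```

The point of such an exercise is not the code itself but the justification: which behavior fits the application, and how would you defend that choice?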
Korea can move quickly because its education system already rigorously tracks performance. Replace some timed, single-answer tests with short, frequent tasks where students submit: a problem frame, an AI-assisted plan, an audit of their model or tool choice, and reflections on their failures. Rubrics should evaluate the clarity of assumptions, the choice of evaluation metrics, and the student’s ability to highlight second-order effects. This approach maintains meritocratic standards while rewarding advanced thinking. National curriculum language that emphasizes adaptable skills allows for such shifts; the challenge is effectively applying those changes in schools still influenced by hagwon culture. Adjusting classroom incentives to promote AI cognitive extensions is essential for turning policy into practice.
The institutional evidence is telling. A selective European program that imported Western, problem-based AI education into Asia found that a large majority of its Asian students struggled to apply theory to open-ended applications and synthesis. This prompted a redesign of admissions and teaching methods. The issue was not a lack of mathematical ability; instead, it highlighted the missing connection between abstract concepts and real-world applications in uncertain situations—the very essence of AI cognitive extensions. While one case can’t replace national data, it shows how quickly high-performing test takers can stumble when tasks change from executing algorithms to creating them. We should view this as a cautionary tale and a guide for improvement.
Accountability That Rewards AI Cognitive Extensions
Assessment must evolve if curricula are going to change. The national exam system offers a straightforward opportunity for reform. Korea’s test reforms have primarily focused on removing “killer questions.” While this may reduce the competition in private tutoring, it does not assess what truly matters. Instead, introduce scored components that cannot be crammed for: scenario briefs with uncertain data, brief oral defenses of plans, and audit notes for an AI-assisted workflow that flag hallucinations, biases, and alignment risks. Assess these using double-masked rubrics and random sampling to ensure fairness and scalability. If the exam rewards AI cognitive extensions, the system will teach them.
The second area for change is teacher practices. Recent studies of curriculum decentralization show that just giving teachers more freedom does not ensure improvements; they need practical tools and shared routines. Provide curated task banks aligned with the national curriculum’s goals, along with examples of problem framing, evaluation design, and AI auditing. Pair this with quick-cycle professional learning that reflects student tasks; teachers should practice writing prompts, setting acceptance criteria, and conducting red-team reviews. When teachers have heavier workloads, AI should assist with routine paperwork to free up time for coaching higher-order skills. This approach makes AI cognitive extensions a real practice rather than just a slogan.
Lastly, align signals in higher education. Universities and employers should favor admissions and hiring practices that look for portfolios showcasing framed problems, annotated code and data notes, and decision documents with clear evaluation criteria. Analysis of Lightcast data across OECD countries shows growing demand for creativity, collaboration, and management in AI-related roles. If universities publish criteria that require these materials, secondary schools will start teaching them, and this time the change will take hold. Building a strong connection between signals and skills is how Korea can maintain its excellence and update its competitive edge.
The headline figures tell different stories: Korea’s teenagers shine in tested creativity, but many adults struggle with adaptive problem-solving in real-world situations. Generative AI sharpens this divide. It automates much of a task’s core and shifts value to the beginning and end: the framing at the start and the audit at the end. These edges constitute AI cognitive extensions. They can be taught, assessed fairly, and scaled when exams, classroom activities, and higher education signals support them. Maintain algorithmic fluency; it still has its place. However, do not mistake fast computation under pressure for the ability to define problems, manage uncertainty, and guide tools with sound judgment. The path forward is not to slow down or ban AI in classrooms. Instead, it involves making the human aspects of the work—the parts that guide AI—visible, regular, and graded. If we act now, Korea can turn its early creative strengths into adult capabilities. If we don’t, students who were prepared for yesterday’s challenges will see their advantages fade. The decision—and the opportunity—are ours.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bae, S.-H. (2024). The cause of institutionalized private tutoring in Korea. Asia Pacific Education Review.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work (NBER Working Paper No. 31161).
Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at Work. Quarterly Journal of Economics, 140(2), 889–951.
Lightcast & OECD. (2024). Artificial intelligence and the changing demand for skills in the labour market. OECD AI Papers.
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science.
OECD. (2023). PISA 2022 results (Volumes I & II): Korea country notes.
OECD. (2024). PISA 2022 results (Volume III): Creative thinking—Korea factsheet.
OECD. (2024). Survey of Adult Skills (PIAAC) 2023 Country Note—Korea and GPS Profile: Adaptive problem solving.
Peng, S., et al. (2023). The impact of AI on developer productivity. arXiv:2302.06590.
Seoul/Korean Ministry of Education; Statistics Korea. (2024–2025). Private education expenditures surveys (news summaries and statistical releases).
SIAI (Swiss Institute of Artificial Intelligence). (2025). Why SIAI failed 80% of Asian students: A cognitive, not mathematical, problem. SIAI Memo.
UNESCO. (2023). Guidance for generative AI in education and research.