Rethinking AI Energy Demand: Planning for Power in a Bubble-Prone Boom

By David O'Neill

David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.

AI energy demand may surge—but isn’t guaranteed
Nuclear later; near term: renewables, storage, and load shifting
Schools should plan for boom or bust with flexible procurement

By 2030, global electricity consumption by data centers is expected to exceed 1,000 TWh, about double 2024 levels and nearly 3% of total global electricity generation. This figure is significant. It reflects both the scale of growth in computing and a deeper tension in policy. If we view AI energy demand as fixed and unavoidable, we risk embedding today's hype into tomorrow's power systems. However, this scenario isn't predetermined. Demand will depend on how quickly models are used, how widely businesses adopt them, and whether profit margins justify ongoing expansion. If growth slows or the AI bubble deflates, projections that assume steady increases will be off target. If growth persists, we may struggle to provide enough clean power quickly. Either way, educators, administrators, and policymakers should not rely solely on averages; they must create adaptable strategies to prepare for both increases and decreases.

The United States serves as a stress test. An assessment supported by the U.S. Department of Energy predicts that data centers will grow from approximately 4.4% of U.S. electricity use in 2023 to between 6.7% and 12% by 2028, driven by AI adoption even after accounting for efficiency gains. BloombergNEF estimates that meeting AI energy demand could require 345 to 815 TWh of power in the U.S. by 2030, translating to a need for 131 to 310 GW of new generation capacity. These estimates vary widely because the future is genuinely uncertain. They depend on factors such as usage rates, inference intensity, cooling technology, and hardware advancements. This uncertainty should inform public investment and campus procurement strategies: plan for peaks while being prepared for plateaus.
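The implicit arithmetic linking those two ranges is worth making explicit. The minimal sketch below converts annual energy into nameplate capacity, assuming a fleet-average capacity factor of about 30%; that capacity factor is an illustrative assumption, not a published BloombergNEF input.

```python
# Back-of-the-envelope conversion from annual energy (TWh) to nameplate
# capacity (GW). The ~30% fleet-average capacity factor is an assumption
# chosen for illustration, not a published BloombergNEF input.

HOURS_PER_YEAR = 8760

def capacity_gw(annual_twh: float, capacity_factor: float = 0.30) -> float:
    """Nameplate GW needed to supply annual_twh at the given capacity factor."""
    average_gw = annual_twh * 1000 / HOURS_PER_YEAR  # TWh/yr -> average GW
    return average_gw / capacity_factor

for twh in (345, 815):
    print(f"{twh} TWh/yr -> ~{capacity_gw(twh):.0f} GW nameplate")
# Prints roughly 131 GW and 310 GW, matching the cited range.
```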

Figure 1: Data-intensive work clusters in major metros, while CO₂ hotspots only partly overlap, revealing a planning gap between where AI demand will grow and where clean power sits.

The standard narrative is straightforward: AI chips consume a lot of power, cloud demand is increasing, and grids are under pressure; thus, we must expand clean power, particularly nuclear energy. This chain of reasoning holds some truth but is incomplete. The missing element is the conditional nature of demand. Much of the anticipated load growth relies on business models—such as autonomous systems taking over many white-collar jobs—that haven't yet proven effective at scale. If those assumptions don't hold, the story about AI energy demand shifts from a steep rise to a plateau. Our goal is to create policies for energy and education that apply to both potential outcomes, avoiding the pitfalls of excessive optimism or pessimism.

Productivity Claims vs. Power Reality

Evidence on productivity in specific situations is encouraging. In a significant field study, providing a generative AI assistant to over 5,000 customer support agents increased the number of issues resolved per hour by about 14% on average, with greater gains for less experienced staff. This indicates real improvements at the task level. However, it does not by itself prove that automated systems will replace a large share of jobs or sustain the high inference volumes needed to keep AI energy demand growing rapidly through 2030. Economy-wide productivity matters because it funds capital expenditures and keeps servers operational. In 2024, U.S. non-farm business labor productivity grew by about 2.3%. This is good news, but it is not attributable solely to AI, nor is it yet large enough to settle the energy debate.

Macroeconomic signals are mixed. Analysts have highlighted a notable gap between the capital expenditures for AI infrastructure and current AI revenues. Derek Thompson and others summarize estimates suggesting spending on AI-related data centers could reach $300 to $400 billion in 2025, compared to much lower current revenues—typical indicators of a speculative boom. Even some industry leaders acknowledge bubble-like traits while emphasizing the necessity for a multi-year runway. If revenue growth falls short, usage will shift from exponential to selective, leading to a decline in AI energy demand. This possibility should be considered in grid and campus planning—not as a prediction, but as a scenario with specific procurement and contracting consequences.

The baseline for data centers is also changing rapidly. The International Energy Agency (IEA) projects that global electricity consumption by data centers could double to about 945 TWh by 2030, growing roughly 15% per year, which is over four times the growth rate of total electricity demand. U.S. power agencies anticipate a break from a decade of flat demand, driven in part by AI energy needs. However, the IEA also highlights significant differences in the sources of new supply: nearly half of the additional demand through 2030 could be met by renewables, with natural gas and coal covering much of the rest, while nuclear energy's contribution will increase later in the decade and beyond. This mix highlights the policy dilemma: should we aim for slower demand growth, faster clean supply, or both?

Nuclear's Role: Urgent, but Not Instant

Nuclear power has clear advantages for meeting AI energy demand: high capacity factors, stable output, a small land footprint, and life-cycle emissions comparable to those of wind power. In 2024, global nuclear generation reached a record 2,667 TWh, with average capacity factors around 82%. This reliability is valuable to data center operators. These attributes are crucial for long-term planning. The challenge lies in time. Recent data indicate that median construction times range from approximately 5 to 7 years in China and Pakistan, to 8 to over 17 years in parts of Europe and the Gulf, with some Western projects taking well beyond a decade. Small modular reactors, often touted as short-term solutions, have faced cancellations and delays; the flagship project in Idaho was terminated in 2023. In other words, while nuclear is a strategic asset, it alone cannot handle the immediate surge in AI energy demand.

This timing issue is significant because the period from 2025 to 2030 is likely to be critical. Even optimistic timelines for small modular reactors suggest that the first commercial units won't be ready until around 2030; many large reactors now under construction will not connect to the grid before the early to mid-2030s. Meanwhile, wind and solar energy sources are being added at unprecedented rates—around 510 GW of new renewable capacity in 2023 alone—but their variable generation and connection backlogs limit how much of the AI surge they can support without storage and transmission improvements. The practical takeaway is a three-part plan: push for nuclear expansion in the 2030s, accelerate renewable energy and storage investments now, and implement demand-side strategies to manage AI energy demand in the meantime.

Even in a nuclear-forward scenario, careful procurement matters. Deals for data centers near campuses should combine long-term power purchase agreements with enforceable clean-energy guarantees, not just claims about the grid mix. Where nuclear is feasible, offtake contracts can support financing; where it is not, long-duration storage and reliable low-carbon options—including geothermal and hydro upgrades—should form the basis of the energy strategy. The goal of policy is not to choose winners, but to secure reliable, low-carbon megawatt-hours that match the hourly profile of AI energy demand.

What Schools and Systems Should Do Now

Education leaders are in a unique position: they are significant electricity consumers, early adopters of AI, and trusted community conveners. They should respond to AI energy demand with three specific actions. First, treat demand as something that can be shaped. Model the load under two scenarios: ongoing AI-driven growth and a moderated rate if automated systems don't deliver. Align procurement with both scenarios—short-term contracts with flexible volumes for the next three years, and longer-term clean power agreements that expand if usage proves sustainable. Include provisions to reduce usage during peak times, and implement price signals that encourage non-essential tasks to be done during off-peak hours. This same approach should apply to campus research groups: establish scheduling and queuing rules that prioritize daytime solar energy whenever possible and enhance heat recovery from servers for buildings.
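A campus energy office can make the two-scenario exercise concrete in a few lines of code. In the sketch below, the base load and both growth rates are hypothetical placeholders rather than benchmarks; the point is the procurement logic of contracting firm volumes to the plateau path and hedging the gap to the boom path with flexible purchases.

```python
# A minimal two-scenario campus load model. All numbers are hypothetical
# placeholders; real inputs would come from metered campus data.

BASE_LOAD_GWH = 120.0          # hypothetical current annual campus load

SCENARIOS = {
    "ai_boom": 0.12,           # sustained AI-driven growth, ~12%/yr (assumed)
    "ai_plateau": 0.02,        # moderated growth if automation underdelivers
}

def project_load(base_gwh: float, growth: float, years: int = 5) -> list[float]:
    """Annual load projection under compound growth."""
    return [base_gwh * (1 + growth) ** y for y in range(1, years + 1)]

for name, growth in SCENARIOS.items():
    path = project_load(BASE_LOAD_GWH, growth)
    print(name, [f"{gwh:.0f}" for gwh in path])

# Procurement rule of thumb: contract firm volumes to the plateau path and
# cover the gap to the boom path with flexible, shorter-term purchases.
firm = project_load(BASE_LOAD_GWH, SCENARIOS["ai_plateau"])
boom = project_load(BASE_LOAD_GWH, SCENARIOS["ai_boom"])
flex = [b - f for b, f in zip(boom, firm)]
print("flexible volume to hedge (GWh/yr):", [f"{g:.0f}" for g in flex])
```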

Second, connect education to energy use. Update curricula to address AI energy demand explicitly within computer science, data science, and EdTech programs. Teach energy-efficient model design, including quantization, distillation, and retrieval, to reduce inference costs. Introduce foundational grid knowledge—such as capacity factors and marginal emissions—so graduates can develop AI that acknowledges physical limits. Pair this learning with real-world procurement projects: allow students to analyze power purchase agreement terms, claims of additionality, and hourly matching. Future administrators will need this knowledge as much as they require skills in privacy or security.
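A toy exercise of the kind such a course could assign: schedule deferrable compute jobs into the lowest marginal-emissions hours of the day. The 24-hour emissions profile below is invented for illustration (a midday solar trough and an evening peak) and is not real grid data.

```python
# Toy illustration of hourly matching: shift deferrable compute jobs to the
# lowest marginal-emissions hours. The profile below is invented for
# illustration, not real grid data.

MARGINAL_CO2_G_PER_KWH = [
    520, 510, 500, 495, 490, 480, 450, 400,   # overnight into morning
    330, 250, 200, 180, 170, 180, 220, 300,   # midday solar trough
    420, 500, 560, 580, 570, 555, 540, 530,   # evening peak
]

def schedule_deferrable(jobs_kwh: list[float]) -> list[tuple[int, float]]:
    """Assign each deferrable job to the cleanest remaining hour."""
    hours = sorted(range(24), key=lambda h: MARGINAL_CO2_G_PER_KWH[h])
    return [(hours[i], kwh) for i, kwh in enumerate(jobs_kwh)]

jobs = [50.0, 40.0, 30.0]  # hypothetical batch training/inference jobs, kWh
for hour, kwh in schedule_deferrable(jobs):
    grams = kwh * MARGINAL_CO2_G_PER_KWH[hour]
    print(f"run {kwh:.0f} kWh job at hour {hour:02d}: ~{grams/1000:.1f} kg CO2")
```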

Third, plan for both growth and decline. If usage skyrockets as anticipated, the U.S. could need between 11% and 26% more electricity by 2030 to support AI computing; campuses should prepare to adjust workloads, invest in storage, and strengthen distribution networks. If the bubble bursts, renegotiate the minimum terms of offtake agreements, direct surplus clean energy toward electrifying fleets and buildings, and retire the least efficient computing resources early. Taking these approaches safeguards budgets and supports climate goals. It is crucial to reject the notion that the only solution is to produce more energy. Good policy must also address demand.

Anticipating the Strongest Critiques

One critique suggests that efficiency will outpace demand. New designs and improved power usage effectiveness (PUE) could limit AI energy needs. This is a plausible argument. However, history warns of the Jevons paradox: lower costs per token can lead to increased consumption. Even the most positive efficiency projections indicate that overall demand will still rise in the IEA's base case, as user growth outweighs savings from improved efficiency. Others argue that AI could offset its energy costs through enhanced productivity. Studies at the task level show gains, particularly among less experienced workers, and U.S. productivity has improved. Yet, these advancements are not substantial enough at this point to guarantee the revenue streams necessary to support permanent, exponential growth. It is wiser to plan for different possible outcomes than to deny potential issues.

A second critique argues that nuclear energy can meet the rising demand if we "choose to build." We should indeed make that choice—quickly and safely. However, current timelines present challenges. Recent global construction times vary greatly; early adopters of small modular reactors have faced setbacks, and large projects in the West continue to have extensive delays. While nuclear is necessary in the mix for the 2030s, it is not a quick fix for AI energy demands in 2026. This is why procurement strategies must combine short-term renewable and storage solutions with long-term firm sources. Regulators must also expedite transmission and interconnection processes; without proper infrastructure, new clean energy resources cannot reach new demand.

A third critique argues that fears of an "AI bubble" are exaggerated. This may be true. However, even industry insiders recognize bubble-like characteristics in the market while suggesting that any downturn may still be years away. For public institutions, the appropriate response is not to stake everything on either scenario. Instead, they should ensure flexibility: secure adaptable contracts, staged developments, colocated storage, and systems that maximize value from every kilowatt-hour. This strategy works well in both growth and downturn periods.

Design for Uncertainty, Not for Averages

The key numbers are concerning. Global AI energy demand for data centers could surpass 1,000 TWh by 2030. In the U.S., data centers could account for 6.7% to 12% of total electricity by 2028, and meeting growth needs by 2030 could require adding 131 to 310 GW of new capacity. These estimates validate the need for urgent action without leading to despair. They also remind us to stay humble about what the future holds: if automated systems fail to generate substantial productivity consistently, usage will decline, and long-term energy investments based on steady growth will fall short. Conversely, if AI continues to expand, every clean megawatt we can create will be essential, with nuclear energy playing a more significant role later in the decade and into the 2030s. The unifying theme is design. Institutions should approach AI energy demand as a variable they can control—shaped through contracts, software, education, and operations. This involves securing flexible offtake today, investing in robust low-carbon supply tomorrow, and maintaining a constant focus on efficiency. It also means graduating students who understand how to work with the existing grid and the new grid we need to build. The call to action is clear: prepare for growth, protect against downturns, and ensure that every incremental terawatt-hour is cleaner than the last.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

BloombergNEF. (2025, October 7). AI is a game changer for power demand.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work (NBER Working Paper No. 31161).
Ember. (2025, April 8). Global Electricity Review 2025.
IEA. (2024–2025). Energy and AI: Energy demand from AI; Energy supply for AI.
IEA. (2024). Electricity 2024 – Executive Summary.
IEA. (2023). Renewables 2023 – Executive Summary.
LBNL/DOE. (2024, December 20). Increase in U.S. electricity demand from data centers. U.S. DOE summary.
Reuters. (2025, February 11). U.S. power use to reach record highs in 2025 and 2026, EIA says.
U.S. BLS. (2025, February 12). Productivity up 2.3 percent in 2024.
World Nuclear Association. (2024–2025). World Nuclear Performance Report; capacity factors and 2024 generation record.
World Nuclear Industry Status Report. (2024). Construction times 2014–2023 (Table).
Thompson, D. (2025, October 2). This Is How the AI Bubble Will Pop.
The Verge. (2025, August). Sam Altman says "yes," AI is in a bubble.
Business Insider. (2025, October). Former Intel CEO Pat Gelsinger says AI is a bubble that won't pop for several years.


From Model Risk to Market Design: Why AI Financial Stability Needs Systemic Guardrails

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

AI adoption is concentrated on a few cloud and model providers, creating systemic fragility
Correlated behavior and shared updates can amplify shocks across markets
Regulators should stress-test correlation, mandate redundant rails, and map dependencies to safeguard AI financial stability

Three vendors now support most of finance’s machine learning systems. By the second quarter of 2025, AWS, Microsoft Azure, and Google Cloud together held about two-thirds of the global cloud infrastructure market. Banks, brokers, insurers, and market utilities increasingly use AI on the same infrastructure, with identical toolchains, often trained on similar data. This is not just about operational ease; it is a market design choice with broader implications. When systems learn from the same patterns and operate on shared frameworks, errors and feedback loops can escalate quickly. Central banks are starting to take notice. The Bank of England has highlighted the need to stress-test AI. The Financial Stability Board points to a lack of visibility into usage and new vulnerabilities. The BIS urges supervisors to balance rapid adoption with strong governance. The challenge for policymakers is straightforward: can we move from fixing models individually to implementing system-level safeguards that support AI financial stability before the next crisis hits?

AI Financial Stability: Reframing the Risk Map

The old perspective views "AI risk" as a series of technical glitches: bias in scorecards, chatbots that misinterpret data, or models that stray from their intended use. The new perspective takes a broader view: if the financial system shifts price discovery, risk transfer, and customer flows onto uniform, centralized AI pipelines, the shape of rare events changes as well. System outages, update failures, or parameter changes at one provider can affect several firms simultaneously. Similar models can respond to the same signals at the same time, leading to herding. The IMF warns that this can amplify price changes and push leverage and margining frameworks beyond the levels anticipated by older standards. In essence, AI financial stability is not just about better models. It is about shared infrastructures, correlated behaviors, and the speed of collective responses, intended or unintended.

This new perspective is essential now because AI adoption is widespread. Supervisors report significant AI use in credit, surveillance, fraud detection, and trading. ESMA has informed EU firms that boards are accountable for AI-driven decisions, even when tools are sourced from third parties. The FSB’s 2024 assessment highlights monitoring gaps and vendor concentration as structural issues. Meanwhile, the BIS outlines how authorities are adopting AI for policy tasks. A system that runs on AI while being governed by AI offers opportunities for better coordination but also poses risks of correlated failures when inputs are unreliable or stressed. This is a question of stability, not just compliance.

Figure 1: The “compute rail” for finance is already concentrated: the top three clouds host nearly two-thirds of global infrastructure, setting the stage for correlated failure modes.

AI Financial Stability: Evidence on Pressure Points (2023–2025)

First, consider concentration. By mid-2025, the top three cloud providers controlled about two-thirds of the global infrastructure market. While this doesn’t specifically reflect finance, it highlights systemic exposure, as many regulated firms are moving their analytics and data operations to these providers—both the FSB and Bank of England flag third-party and vendor risks in AI. Additionally, IMF discussions prioritize herding and concentration as key concerns. When you combine these factors, the takeaway is straightforward: upstream concentration and downstream uniformity increase risk during stress events. Method note: We use the share of global infrastructure as a proxy for potential concentration in financial AI hosting, following FSB guidance on proxy indicators when direct metrics are unavailable.
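The method note can be made concrete with a standard concentration measure. The sketch below computes a Herfindahl-Hirschman index (HHI) from an assumed split of the roughly two-thirds top-three share cited above; the individual provider shares are illustrative approximations, not exact market data.

```python
# Herfindahl-Hirschman index (HHI) as a rough concentration proxy for the
# "compute rail". The split between providers is an illustrative assumption
# consistent with the roughly two-thirds top-three share cited above.

shares = {"AWS": 0.30, "Azure": 0.20, "Google Cloud": 0.13}  # assumed split

# HHI sums squared market shares expressed in percentage points.
hhi = sum((s * 100) ** 2 for s in shares.values())
print(f"partial HHI from the top three alone: {hhi:.0f}")  # ~1469

# Under the usual antitrust convention (1,500-2,500 = moderately
# concentrated, >2,500 = highly concentrated), three firms alone nearly
# reach the moderate threshold before counting any of the long tail.
```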

Second, focus on speed and amplification. The IMF has warned that AI can accelerate and amplify price movements. Bank of England officials have suggested including AI in stress tests, as widespread use in risk and trading could increase volatility during stressful conditions. The FSB adds that limited data on AI adoption hampers oversight. These concerns are not just theoretical; they relate to familiar situations: one-sided order flow from similarly trained agents, sudden deleveraging when risk limits are hit simultaneously, and operational correlations when many firms patch to the same model update. Method note: these insights come from official speeches and reports from 2024–2025 and align with established surveillance tools. They don’t assume unnoticed failures; they interpret current policy signals as early warnings that need to be incorporated into macroprudential design.

AI Financial Stability: What to Do Now—Design, Not Band-Aids

The first policy shift should move supervision from model risk to market structure. Today’s guidelines focus on validation, documentation, and local explainability. Those are important, but macroprudential policy must also address three design questions: How many independent compute infrastructures support key market functions? How varied are the training data and objectives across major dealers and funds? Can critical services function properly during a failure of a provider or model? These answers will guide the use of known tools: sector-wide scenario analysis that includes correlated AI shocks, system-level concentration limits when feasible, and redundancy requirements for essential infrastructure. The Bank of England’s interest in stress testing AI is a start. The goal is to scale this into a shared, international standard that aligns with FSB monitoring. This will make AI financial stability part of a cohesive macroprudential program, promising a more secure and stable financial future.
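A toy Monte Carlo shows why correlated AI shocks belong in these scenarios: when many firms depend on one provider, the probability that half the market fails at once collapses to roughly the provider's own outage rate. All failure probabilities below are illustrative assumptions, not calibrated estimates.

```python
# Monte Carlo sketch: probability that many firms fail together when they
# share one upstream provider, versus running on independent infrastructure.
# All probabilities are illustrative assumptions.

import random

N_FIRMS = 20
P_PROVIDER_OUTAGE = 0.02   # assumed chance the shared provider fails
P_IDIOSYNCRATIC = 0.02     # assumed standalone failure rate per firm
TRIALS = 100_000

def joint_failure_rate(shared: bool) -> float:
    """Fraction of trials in which at least half the firms fail at once."""
    hits = 0
    for _ in range(TRIALS):
        provider_down = shared and random.random() < P_PROVIDER_OUTAGE
        failures = sum(
            provider_down or random.random() < P_IDIOSYNCRATIC
            for _ in range(N_FIRMS)
        )
        hits += failures >= N_FIRMS // 2
    return hits / TRIALS

print("shared provider :", joint_failure_rate(shared=True))
print("independent     :", joint_failure_rate(shared=False))
# With shared infrastructure, the "half the market down" event occurs at
# roughly the provider outage rate; with independence it is vanishingly rare.
```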

Figure 2: AI spend more than doubles in four years; scale alone can turn local model errors into system features without guardrails.

The second move is to close the data gap without over-collecting proprietary information. Authorities should establish a minimum observatory for AI use: identifying hosting locations by importance, model categories tied to critical functions, change-management schedules, and dependency maps for key datasets and vendors. The FSB has suggested monitoring indicators; these could serve as a standard regulatory return. ESMA has clarified board accountability under MiFID. Following that logic, firms should be able to confirm their AI dependency maps just as they do for cyber and cloud risks. The BIS's work cataloging supervisory AI can help measure improvements in regulatory technology and identify where shared models might create supervisory blind spots. We don't need every parameter—we need a clear map—and closing the data gap is a crucial step toward achieving it.
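What might such a dependency-map return look like in machine-readable form? The schema below is a sketch in the spirit of the FSB's proposed indicators; every field name and example entry is invented for illustration, and no official schema is implied.

```python
# A minimal sketch of a machine-readable AI dependency map. Field names and
# example entries are invented for illustration; no official schema exists.

from dataclasses import dataclass, field

@dataclass
class AIDependency:
    function: str                # critical business function served
    model_category: str          # e.g. "credit scoring", "trade surveillance"
    provider: str                # upstream compute/model vendor
    hosting: str                 # region or facility tier
    criticality: str             # "critical", "important", or "routine"
    fallback: str | None = None  # independent rail, if any

@dataclass
class DependencyMap:
    firm: str
    entries: list[AIDependency] = field(default_factory=list)

    def single_points_of_failure(self) -> set[str]:
        """Providers serving a critical function with no fallback."""
        return {
            e.provider for e in self.entries
            if e.criticality == "critical" and e.fallback is None
        }

# Hypothetical example return:
m = DependencyMap("ExampleBank", [
    AIDependency("payments fraud screening", "fraud model", "CloudA",
                 "eu-west", "critical"),
    AIDependency("client chat triage", "LLM assistant", "CloudB",
                 "us-east", "routine", fallback="manual queue"),
])
print(m.single_points_of_failure())   # {'CloudA'}
```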

AI Financial Stability: Anticipating the Pushback

One critique suggests that the benefits outweigh the risks. AI is already reducing fraud, speeding up compliance, and enhancing service. Surveys indicate that banks expect significant profit increases in the coming years. While that’s true and positive, benefits don’t eliminate tail risk; they change it. Fraud detection systems that rely on shared models can create single points of failure in payments. Faster client onboarding across the sector can synchronize risk appetites near the peak. In trading, even a slight alignment of objectives can lead to larger price movements more quickly. As Yellen pointed out, scenario analysis must account for opacity and vendor concentration as critical aspects. The goal of a stability regime is not to hinder productivity; it is to protect it.

Another critique argues that existing regulations already cover this area: model risk, outsourcing, and cyber concerns. To some extent, that’s correct. But fragmentation is the issue. Outsourcing rules do not require industry-level redundancy for AI compute used in critical market functions. Model risk regulations do not confront herding among multiple firms, even if each model is sound individually. Cyber frameworks focus on malicious threats, not benign failures that follow a shared update. Policy can adapt quickly. ESMA’s 2024 statement assigns ultimate accountability to the board. The Bank of England is advocating for stress tests. The FSB has defined indicators for adoption. We should integrate these into a macroprudential standard that addresses current market dynamics—one in which AI financial stability hinges on effective correlation management.

The statistic that opens this essay is not just an interesting fact. It forms the core of today's risk landscape. When a few providers host most of the industry's AI, when many firms adjust similar models using overlapping data, and when policy itself operates on machine systems, fragility becomes a systemic issue. We do not need to fear AI to address this; we need to view AI financial stability as an essential design task. The steps are straightforward: stress-test correlations, not just capital; require redundant systems where concentration exists; map dependencies as a standard return; and ensure board accountability aligns with industry outcomes. Benefits will expand, not contract, when the market trusts these systems. The next crisis will not wait for perfect data. It will test whether our safeguards match the structure we have chosen. If we act now, we can secure AI's advantages and mitigate its risks. If we delay, the next surge in speed and herding may catch us off guard even faster.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bank of England. (2025, April 9). Financial Stability in Focus: Artificial intelligence in the financial system.
Breeden, S. (2024). Banks’ use of AI could be included in stress tests [Interview/coverage]. Financial Times.
BIS. (2025, October 8). Artificial intelligence and central banks: monetary and financial stability.
BIS FSI. (2025, June 26). Financial stability implications of artificial intelligence — Executive summary.
ESMA. (2024, May 30). EU watchdog says banks must take full responsibility when using AI [Guidance coverage]. Reuters.
FSB. (2024, November 14). The financial stability implications of artificial intelligence (Report and PDF).
FSB. (2025, October 10). Monitoring adoption of AI and related vulnerabilities (Indicators paper).
IMF. (2024, September 6). Artificial intelligence and its impact on financial markets and financial stability (Remarks).
IMF. (2024, October). GFSR, Chapter 3: Advances in AI — Implications for capital markets (Outreach findings on concentration risk).
Statista. (2025, August 21). Worldwide market share of leading cloud infrastructure providers, Q2 2025.


Southeast Asia AI Productivity: Why the Payoff Rises or Falls with Learning, Not Just Spend

By David O'Neill


AI investment pays off in Southeast Asia only when paired with real workforce learning
Training, workflow redesign, and governance turn tools into measurable productivity and wage gains
Shift budgets from hardware to people so diffusion is broad, fast, and inclusive

The most revealing number in today’s AI conversation isn’t a billion-dollar investment or a flashy compute benchmark. It’s the fourfold difference in productivity growth between sectors most exposed to AI and those least exposed. This is coupled with a 56% wage premium for workers in AI-skilled roles. These indicators are based on PwC’s analysis of nearly a billion job ads and firm outcomes. They highlight a clear point: where AI is used effectively, output per worker increases quickly, and wages rise. Where it isn’t applied well, the opposite happens. In Southeast Asia, the investment narrative is significant, with tens of billions of dollars poured into data centers and cloud regions, leading the region's digital economy back to double-digit growth. However, the returns depend on people, processes, and time. The main argument of this essay is straightforward: Southeast Asia's AI productivity will depend on how quickly schools, companies, and government bodies can transform AI from mere tools into daily habits.

Southeast Asia AI productivity starts with human learning

We know AI can boost output within firms. An extensive study of customer support agents found that access to a generative AI assistant increased average productivity by about 14%, with the most significant gains—about a third—among the least experienced workers. This scenario is common in many Southeast Asian service jobs, which often have high turnover, steep learning curves, and significant skill gaps. Studies of software illustrate the same point. In a controlled task, developers using GitHub Copilot completed coding nearly 56% faster than the control group. This efficiency increase adds up over a year of sprints and fixes. The mechanism isn’t magical. AI captures practical know-how from skilled workers and offers it to novices during their work, shortening the learning curve and spreading best practices. In short, Southeast Asia's AI productivity improves when learning speeds up.

The challenge is that learning never happens for free. Surveys show many employees save significant time using AI, but only a small percentage receive formal training on how to apply it. A global workplace poll found that users save about an hour per day on average, but only 1 in 4 had received training. Another study from the St. Louis Fed measured time savings of 5.4% of weekly hours for users—about 2 hours per week for a standard 40-hour schedule. Training older or less tech-savvy workers requires intentional effort. In a UK pilot, simple permissions and a few hours of coaching significantly increased AI usage among late-career women in lower-income jobs—a lesson with clear implications for Southeast Asia's diverse labor markets. Unless leaders allocate time and money for this human ramp-up, AI tools will remain unused, and productivity will stagnate. Improving Southeast Asia's AI productivity is primarily a management challenge rather than a hardware race.

Southeast Asia AI productivity depends on uneven adoption

Adoption across the region is occurring but remains uneven. According to Google, Temasek, and Bain, over $30 billion was invested in AI infrastructure in Southeast Asia during the first half of 2024, while the broader internet economy returned to double-digit growth. Governments and tech giants are acting quickly: Thailand approved $2.7 billion in funding for data center and cloud investments, while Microsoft committed $1.7 billion to Indonesia, aiming to train 840,000 Indonesians in AI skills as part of a regional goal of 2.5 million. Yet, enterprise readiness lags. An IDC/SAS survey revealed that only 23% of Southeast Asian organizations are at a “transformative” stage of AI use. A separate Deloitte survey shows that executives identify the most significant barriers as talent shortages, risk, and a shaky understanding of the technology. In simple terms, capital is arriving faster than the necessary skills.

Figure 1: Sectors most exposed to AI grew productivity ~5× faster than less-exposed sectors; labor markets also price a 56% wage premium for AI skills—evidence that capability, not hype, drives returns.

There are encouraging signs. Agentic AI—software that connects tasks—might expand quickly as companies turn pilot programs into standard operations. Multiple regional surveys indicate that about two in five firms already use such agents, with most others planning rollouts within the following year. However, relying on averages obscures significant national differences in skills, infrastructure, and digital trust. The World Bank warns that when adoption depends on task structures and complementary skills, the benefits will flow to workers and firms that can adapt to the technology, leaving others behind. The OECD reaches a similar conclusion: AI can boost productivity, but long-term benefits rely on widespread usage, regulations, and inclusivity. This leads to a clear policy implication: to enhance Southeast Asia's AI productivity, leaders must close the “last-mile” gap between large capital expenditures and frontline workers.

Financing Southeast Asia AI productivity: from capex to opex

Much of the AI budget in the region focuses on hardware, cloud credits, and vendor proofs of concept. The larger returns lie in the operating budget: training time, workflow redesign, risk management, and change processes. This is where many digital programs fall short. Research shows that 70% of significant transformations exceed their original budget—often because they underestimate the organizational effort involved. The empirical evidence on potential payoff is becoming clearer. PwC’s analysis links AI exposure to faster productivity growth and increased revenue per employee. MIT’s call center experiment, along with the Copilot RCT, provides estimates of the gains firms can expect from well-planned adoption. These figures support a shift in perspective: view training and change as investments with measurable results, not just expenditures to cut.

What could effective operating expenses look like? Start with hours saved. If typical users can save around 5% of their weekly time now—and even more on repetitive knowledge tasks—modest adoption across a 10,000-person company can yield thousands of hours freed weekly. Add small-group coaching, workflow standards, and secure model access, and the time savings can accelerate further. In practice, measurable benefits often appear quickly once tools are integrated: Bain’s Southeast Asia analysis notes that many firms see returns within 12 months. On the public side, targeted skills programs can increase returns on private investments. The ILO’s new initiative to deliver digital skills in ASEAN’s construction sector serves as a valuable model for employer-linked training. Microsoft’s large-scale upskilling efforts in Indonesia aim in the same direction. A practical rule emerges: if we cannot identify scheduled training hours and a redesigned workflow, we should expect Southeast Asia's AI productivity to disappoint, regardless of how much computing power we acquire.
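The arithmetic behind those freed hours is easy to check. In the sketch below, the 5.4% savings rate is the St. Louis Fed estimate cited above, while the adoption share, loaded hourly cost, and training budget are illustrative assumptions used only to show a payback calculation.

```python
# Worked example of the time-savings arithmetic in the text. The adoption
# share, hourly cost, and training budget are illustrative assumptions.

HEADCOUNT = 10_000
WEEKLY_HOURS = 40
SAVINGS_RATE = 0.054       # ~5.4% of weekly hours (St. Louis Fed estimate)
ADOPTION = 0.30            # assumed share of staff actively using AI tools

hours_saved_per_week = HEADCOUNT * ADOPTION * WEEKLY_HOURS * SAVINGS_RATE
print(f"hours freed per week: {hours_saved_per_week:,.0f}")   # ~6,480

# Rough payback on a training program (all figures hypothetical):
HOURLY_COST = 15.0          # loaded cost per employee-hour, USD
TRAINING_COST = 1_000_000   # one-off program cost, USD
weekly_value = hours_saved_per_week * HOURLY_COST
print(f"payback in ~{TRAINING_COST / weekly_value:.0f} weeks")  # ~10 weeks
```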

Figure 2: Only 23% of SEA organisations operate at a transformative AI level and just ~33% of employees report proper AI training; typical users save ~5.4% of weekly hours—small but scalable once training and workflow redesign are funded.

Governing Southeast Asia AI productivity for the long run

Sustained productivity improvements require policies that facilitate diffusion while minimizing harm. The World Bank’s work in East Asia and the Pacific highlights that skill policies, mobility, taxes, and social protections will determine whether technology promotes inclusion or inequality. In education, this means expanding from pilot programs to overarching curricula that include training in prompting, verification, and tool selection from upper secondary levels onward. In TVET systems, this involves establishing AI labs linked to local businesses and introducing stackable micro-credentials that align with real jobs. In universities, it means enforcing strict academic integrity policies while still allowing supervised AI use for drafting, coding, and analysis.

For administrators, procurement should focus on outcomes rather than just acquiring licenses. Contracts can stipulate vendor-funded training hours per license, workflow templates, and measurable time savings at six- and twelve-month intervals. Ministries could collaborate to establish regional standards for model safety, data governance, and interoperability, thereby reducing costs and risks for smaller institutions. The OECD's caution regarding uneven diffusion, combined with PwC's findings on wage premiums, supports reskilling subsidies tied to wages and mobility assistance, ensuring benefits don't just concentrate in already-advantaged areas. Lastly, infrastructure policy must remain practical. Reports on Thailand’s data center expansion and coverage of Indonesia’s hyperscale investments illustrate this point. Data centers are vital, but their social return depends on practical skills and open access. Otherwise, these power-hungry assets could remain unused while schools and clinics lack the necessary tools. The goal of governance is steady, inclusive, measurable, and sustainable improvement in Southeast Asia's AI productivity.

The key figures introduced at the beginning of this essay—greater productivity growth and a 56% wage premium in AI-focused roles—are not inevitable; they are invitations to action. They demonstrate what can be achieved when tools meet trained individuals and when work processes are restructured. In Southeast Asia, capital is flowing in through national data center initiatives and hyperscaler commitments for training. Research results consistently indicate that productivity increases most rapidly when newcomers learn quickly and when managers allocate resources for change. The region now faces a clear choice. It can view AI as a competition for hardware and settle for narrow benefits concentrated in a few companies and cities. Alternatively, it can prioritize people—teachers, nurses, programmers, clerks—and invest in the time, coaching, and standards necessary for practical tool usage. If it chooses the latter path, Southeast Asia's AI productivity could become a strong driver of the next growth cycle: compounding, inclusive, and evident in both paychecks and profits.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Adecco Group. (2024, October 17). AI saves workers an average of one hour each day (press release).
AP News. (2024, April 30). Microsoft will invest $1.7 billion in AI and cloud infrastructure in Indonesia.
Bain & Company; Google; Temasek. (2024). e-Conomy SEA 2024. Key highlights page.
BCG. (2024, June 26). AI at Work in 2024: Friend and Foe.
Deloitte. (2025). Generative AI in Asia Pacific (regional pulse).
IDC (commissioned by SAS). (2024, November 6). IDC Data & AI Pulse: Asia Pacific 2024 (SEA cut).
ILO. (2025, June 26). New initiative to boost green and digital skills in ASEAN construction.
McKinsey & Company. (2023, April 11). Why most digital transformations fail—and how to flip the odds.
NBER (Brynjolfsson, Li, & Raymond). (2023). Generative AI at Work (Working Paper No. 31161).
OECD. (2024). The impact of artificial intelligence on productivity, distribution and growth.
PwC. (2025, June 3/26). Global AI Jobs Barometer press materials (productivity growth; 56% wage premium).
Reuters. (2025, March 17). Thailand approves $2.7 billion of investments in data centres and cloud services.
St. Louis Fed. (2025, February 27). The Impact of Generative AI on Work Productivity.
The Conversation (hosted via University of Melbourne). (2025, August 14/15). Does AI really boost productivity at work? Research shows gains don’t come cheap or easy.
World Bank. (2025, June 2). Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific; (2025, August 5) How new technologies are reshaping work in East Asia and Pacific.
