
The New Kilowatt Diplomacy: How Himalayan Hydropower Is Being Built for AI


By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

AI data centers drive Himalayan hydropower
Buildout deepens China–India water tensions
Demand 24/7 clean, redundant AI power

China is betting big on hydropower to fuel a new era of AI and digital growth. Its new dam on the Yarlung Zangbo (known downstream as the Brahmaputra) is projected to generate about 300 billion kilowatt-hours (300 TWh) per year—roughly the UK’s annual electricity use. If built, it would become the world's largest dam system, leveraging the steep Tibetan gorge for maximum energy generation. With costs topping $170 billion and operations targeted for the 2030s, the project underscores China’s strategy to pair frontier energy with next-generation computing. Beijing says it’s clean energy and good for the economy. But look closer: global data centers consumed around 415 TWh in 2024, with China accounting for a quarter and growing fast. This isn’t just about powering homes and factories. It’s about moving data centers to the Himalayas to harness hydro power—and the political power that comes with it.

Why Himalayan Hydro Power Data Centers Matter

Himalayan hydro power data centers are at the heart of a new global energy race for AI dominance. The International Energy Agency forecasts that data centers could demand up to 945 TWh by 2030—more than double the roughly 415 TWh consumed in 2024. China and the US will drive most of this surge. For China, shifting computing to inland areas with abundant hydro power is a deliberate move to future-proof its AI ambitions. Massive new dams are less about local power needs and more about securing dedicated, reliable energy for AI infrastructure, creating a critical edge as energy becomes the digital world’s most significant constraint.
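To see the scale, a quick back-of-envelope calculation helps. The minimal Python sketch below uses only the figures cited above, which are themselves rough estimates, so treat the results as orders of magnitude rather than precise shares:

# Back-of-envelope comparison using the figures cited in this article.
# All inputs are approximate; the point is scale, not precision.

dam_output_twh = 300.0          # projected annual output of the Yarlung Zangbo cascade
datacenter_2024_twh = 415.0     # estimated global data-center demand in 2024 (IEA)
datacenter_2030_twh = 945.0     # IEA upper-range forecast for 2030
china_share_2024 = 0.25         # China's rough share of 2024 demand

# Share of global data-center demand the cascade could notionally cover
print(f"vs 2024 demand: {dam_output_twh / datacenter_2024_twh:.0%}")
print(f"vs 2030 demand: {dam_output_twh / datacenter_2030_twh:.0%}")

# Share of the 2024-to-2030 demand growth it could notionally cover
growth_twh = datacenter_2030_twh - datacenter_2024_twh
print(f"vs 2024-2030 growth: {dam_output_twh / growth_twh:.0%}")

# China's 2024 data-center consumption implied by a one-quarter share
print(f"China 2024 (approx.): {datacenter_2024_twh * china_share_2024:.0f} TWh")

On those assumptions, a single cascade is worth roughly seventy percent of the world’s 2024 data-center consumption and over half of the forecast growth to 2030, which is what Figure 1 is gesturing at.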

China is already putting things in motion. It has shipped giant turbines to Tibet for a hydro power station built to exploit the region’s large elevation drop; news reports describe them as highly efficient and capable of very large output. It has also touted Yajiang-1, an AI computing center high in the mountains, as part of its East Data, West Computing plan. Even if some claims are overhyped, the message is clear: put computing where power is abundant and cooling is easy, and back it all up with hydro power data centers in the Himalayas. That way, less electricity is lost in transmission, cooling needs shrink, and operators can lock in long-term, cheap power contracts because there is plenty of water available. At least for now.

Environmental, safety, and political risks converge around Himalayan hydro power data centers. Scientists predict worsening floods on the Brahmaputra River as glaciers melt, while downstream nations worry that even unintended Chinese actions could disrupt water flow and sediment transport. The site’s seismic instability adds another layer: as vital computing and power lines depend on these mountains, local disasters can quickly escalate to cross-border digital crises. Thus, these data centers are not only regional infrastructure—they are matters of national security for every nation that relies on this water and power.

Figure 1: A single Himalayan cascade rivals a nation’s yearly power use and covers a large slice of global AI demand growth.

China’s Reasoning

China’s reasoning is simple. It needs lots of reliable, clean energy for AI, cloud computing, and its national computing systems. In 2024, Chinese data centers already consumed roughly a quarter of the world’s total data-center electricity. That demand is growing fast, and analysts expect China to add significantly more by 2030, even with improved energy efficiency. Coastal areas are running short of power, freshwater for cooling, and cheap land. But western and southwestern China have hydroelectric power, wind power, high-altitude free cooling, and space to build. The government has spent billions on inland computing centers since 2022 to move data west. So building huge dams in Tibet looks less like a vanity project and more like a long-term way to underwrite its AI industry.

Beijing also sees it as a way to gain leverage. A dam system making 300 TWh is like having the entire UK’s electricity supply at your fingertips. That means the dams can stabilize power in western China, meet emissions goals, power data centers, and help export industries that want to use green energy. Chinese officials say the dam won’t harm downstream countries, and some experts say most of the river’s water comes from monsoon rains farther south. But trust is low. China hasn’t signed the UN Watercourses Convention and prefers less binding agreements, such as sharing data for only part of the year. India says data sharing for the Brahmaputra River has been suspended since 2023. Without a real agreement, even clean power looks like a political play.

The new turbines and AI computing centers in Tibet add a digital twist to the water issue. News reports describe the Yajiang-1 center as green and efficient, and state media keep pushing the idea of moving data west as a win for everyone. Put it all together, and the picture is clear: computing follows cheap, clean power, and China will manage that power according to its own plans, not under outside pressure. That is why it is hard for other countries to step in. The real decisions are being made in line with China’s own goals, such as grid development, carbon targets, and its chip and cloud plans. That is where Himalayan hydro power data centers fit in.

India’s Move

India can’t ignore what’s happening upstream. It has told China it is concerned and is pushing ahead with its own hydropower and transmission projects in the Brahmaputra basin. In 2025, the government announced a major plan to expand generation in the northeast by 2047, with many projects already underway. The reasoning is both strategic and economic: establish its own water rights, ensure enough water during dry seasons, and support Indian data centers and green industries in the region. This is already happening: India’s data-center electricity use could triple by 2030 as companies and government AI projects grow, which means demand for many more clean power contracts.

Figure 2: India’s AI build-out more than triples data-center share by 2030; siting and 24/7 clean contracts now set long-run grid shape.

Officials in India say the national power grid can handle the increased demand and have mentioned pumped-storage and renewable energy projects to support new data centers. However, India and China share the same water source and face similar earthquake and security risks, meaning that actions taken by either country can directly affect the other. India’s response has included building its own dams, such as the large one in Arunachal Pradesh, even though local people are worried about landslides and being displaced. India has also encouraged Bhutan to sell more power to the Indian grid and has raised concerns about China’s dam with other countries. While these efforts may address supply concerns, they do not reduce the broader risk: without a clear treaty or enduring data-sharing agreements with China on the Brahmaputra, both countries' digital economies remain vulnerable to disputes, natural disasters, or intentional disruptions.

India also has opportunities. It can create a reliable green-computing corridor that combines hydro power and pumped storage in the northeast with solar power in Rajasthan and offshore wind in Gujarat and Tamil Nadu, connected by high-capacity power lines. Data centers will still use a small share of power in 2030, but the choices made now will shape the grid for decades. If India’s power rules and markets reward energy-efficient designs and around-the-clock clean power, those data centers can grow without hitting new constraints later. That is how India takes the driver’s seat: by treating energy planning as compute-industrial policy.

What Needs to Change

Education and government leaders should understand that these are not distant energy concerns. If AI computing increasingly depends on Himalayan rivers, then any disruption—such as a natural disaster or power outage in the region—could directly affect universities, laboratories, schools, and their digital operations. Risk to the water supply becomes risk to digital learning and research capacity.

First, purchasing practices need to change. Institutions that buy cloud and AI services should ask where the energy comes from, how it is matched to clean generation hour by hour, and how it affects river systems. The IEA expects data centers to use far more power by 2030, and AI is a big part of that growth. Contracts should favor providers that can prove they run on clean power in real time and use very little fresh water, especially in areas where water is scarce. River risk is now digital risk, and it should be written into agreements.
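To make that concrete, hourly matching can be scored directly from vendor data. The sketch below is a minimal Python illustration; the sample numbers, the function name, and the 90 percent threshold mentioned in the comment are assumptions for illustration, not an established procurement standard:

# Minimal sketch of hourly carbon-free energy (CFE) matching, assuming the
# vendor supplies hourly site load and hourly contracted clean generation.
# All names, numbers, and thresholds here are illustrative assumptions.

def hourly_cfe_score(load_mwh, clean_mwh):
    """Fraction of consumption covered by clean generation in the same hour."""
    matched = sum(min(l, c) for l, c in zip(load_mwh, clean_mwh))
    total = sum(load_mwh)
    return matched / total if total else 0.0

# Toy example: 6 hours of data-center load vs contracted hydro deliveries
load  = [10, 10, 12, 14, 13, 11]   # MWh consumed each hour
hydro = [12, 11,  9,  8, 14, 12]   # MWh of contracted clean supply each hour

score = hourly_cfe_score(load, hydro)
print(f"Hourly matched clean energy: {score:.0%}")
# A contract might require, say, at least 90% hourly matching plus water-use caps.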

Second, educators should be ready for climate-related computing failures. The Brahmaputra basin is expected to experience longer, more intense floods. An earthquake or flood that knocks out a power station should not shut down school platforms, testing systems, or hospital servers in another state. Governments should require backup capacity in different regions and regularly rehearse the loss of a major hub, such as a Tibetan cascade.
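Regional backup is testable. A simple N-1 style check, sketched below with hypothetical regions and capacities, asks whether critical services survive the loss of any single hub, including a Tibetan one:

# Illustrative N-1 resilience check: can critical services survive the loss
# of any single hosting region? Regions and capacities are hypothetical.

regions = {             # usable compute/power capacity by region (arbitrary units)
    "tibet_hydro": 40,
    "coastal_east": 35,
    "northeast_india": 15,
    "overseas_cloud": 20,
}
critical_load = 75      # capacity needed to keep essential services running

for lost_region in regions:
    remaining = sum(cap for name, cap in regions.items() if name != lost_region)
    status = "OK" if remaining >= critical_load else "SHORTFALL"
    print(f"Lose {lost_region:16s} -> remaining {remaining:3d}: {status}")

If any single loss produces a shortfall, the fix is more cross-region capacity or fewer critical dependencies on that hub.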

Third, realize the limits of diplomacy. China has stayed outside the UN Watercourses Convention and prefers limited, revocable data-sharing arrangements, while the US and Japan can do little more than raise concerns, support alternatives, and help India and Bhutan build capacity. That means the grid itself must be made resilient, and the sources of AI power must be transparent.

Finally, we need better statistics. Estimates of China’s 2024 data-center consumption vary widely, and India’s numbers are just as uncertain. Heavy AI users should publish their energy use, including hourly consumption by location. Without public energy disclosure, “green AI” is a slogan, and ministries cannot steer workloads toward grids that can actually absorb them. Good numbers do more to de-risk Himalayan hydro power data centers than speeches about regional peace.

AI isn’t separate from these decisions. Himalayan hydro power data centers sit inside an old rivalry with no treaty to absorb shocks, on rivers whose flow is shifting as glaciers melt, and outside powers have little leverage over either country’s domestic priorities. The real test ahead is for decision-makers—procurement desks, planners, and educators—to secure digital and energy futures in the face of these growing risks. The choices made now will shape not just river flow, but the reliability and equity of AI for generations to come.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

IBEF. (2025, Oct 29). India’s data center capacity to double by 2027; rise 5x by 2030.
IEA. (2025). Electricity 2025 — Executive summary.
IEA. (2025). Energy and AI — Data-center and AI electricity demand to 2030.
Reuters. (2024, Dec 26). China to build world’s largest hydropower dam in Tibet.
Reuters. (2025, Jul 21). China embarks on world’s largest hydropower dam in Tibet.
S&P Global Market Intelligence. (2025, Sept 17). Will data-center growth in India propel it to global hub status?
Sun, H., et al. (2024). Increased glacier melt enhances future extreme floods in the southern Tibetan Plateau. Journal of Hydrology: Regional Studies.


Stop Chasing Androids: Why Real-World Robotics Needs Less Hype and More Science


By Catherine Maguire

Humanoid robot limitations endure: touch, control, and power fail in the wild
Hype beats reality; only narrow, structured tasks work
Fund core research—tactile, compliant actuation, power—and use proven task robots

We don't have all-purpose robots because building them is more complicated than we thought. The world just keeps messing them up. Get this: about 4 million robots work in factories now—a record high!—but none of us see a humanoid robot fixing stuff around the house. Factory robots kill it with set tasks in controlled areas. Humanoids? Not so much when things get messy or unpredictable. That difference is a big clue. Sure, investors poured billions into humanoid startups in 2024 and 2025, and some companies were valued at insane levels. Reality check, though: dexterity, stamina, and safety are still big problems that haven't been solved. Instead of cool demos, we need to fund research into touch, control, and power. Otherwise, humanoids are stuck on stage.

Physics Is the First Problem for Humanoid Robots

Our bodies use about 700 muscles and receive constant feedback from sight, touch, and our own sense of position. Fancy humanoids brag about 27 degrees of freedom—which sounds cool for a machine, but it's nothing compared to us. It's not just about the number of parts. It's about how muscles and tendons stretch and adapt. And we sense way more than any robot. Even kids learn more, faster. A four-year-old has probably taken in 50 times more sensory info than those big language models, because they're constantly learning about the world hands-on. Simple: motors aren't muscles, and AI trained on text isn't the same as a body taught by the real world.
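LeCun’s 50-times claim is itself a back-of-envelope estimate, and it is easy to reproduce. The sketch below follows the same style of reasoning with assumed round numbers for optic-nerve bandwidth, waking hours, and LLM training-set size; treat it as an order-of-magnitude illustration, not a measurement:

# Rough reproduction of the "child vs. LLM" data comparison.
# All figures are order-of-magnitude assumptions, not measurements.

waking_hours = 16_000                    # ~4 years at ~11 waking hours per day
seconds = waking_hours * 3600
optic_nerve_fibers = 2_000_000           # both eyes combined
bytes_per_fiber_per_sec = 10             # assumed effective rate per fiber

child_visual_bytes = seconds * optic_nerve_fibers * bytes_per_fiber_per_sec

llm_training_bytes = 2e13                # ~10^13 tokens at ~2 bytes per token

print(f"child (4 yrs, vision only): {child_visual_bytes:.1e} bytes")
print(f"large LLM training set:     {llm_training_bytes:.1e} bytes")
print(f"ratio: ~{child_visual_bytes / llm_training_bytes:.0f}x")

With these assumptions the ratio lands in the tens, which is the point: embodied experience delivers far more raw signal than text corpora, whatever the exact multiplier.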

The limitations become evident when nuanced manipulation is required. Humanoid robots typically perform well on demonstration floors, but residential and workplace environments introduce variables. Friction may vary, lighting conditions can shift, and soft materials create obstacles. A robotic hand incapable of detecting subtle slips or vibrations will likely fail to retain objects. While simulation is beneficial, real-world deployment exposes compounding errors. Industry proponents acknowledge that current robots lack the sensorimotor skills that human experience imparts. Therefore, leading experts caution that functional, safe humanoids remain a long-term goal due to persistent physical challenges. Analytical rigor—not rhetoric—is necessary to address these realities.

Figure 1: Most new robots go to highly structured factories: Asia leads by a wide margin, underscoring why hype about general-purpose humanoids hasn’t translated to daily use.

The Economics Problem

Money doesn't solve everything. Figure AI raised $675 million in early 2024 and was valued at $39 billion by September 2025. But cash alone won't turn a demo into a reliable worker. Amazon tested Agility Robotics' humanoid, Digit, to move empty totes. That's something, but it proves that easy, single-purpose jobs in controlled areas are still the only wins right now. General-purpose work? Not there yet. Everything depends on robots working reliably across different tasks, and so far it's just limited tests.

Then there's the energy issue. Work shifts are long, but batteries aren't. Agility says its new Digit lasts four hours on a charge, which is okay for set tasks with charging breaks. But that's not an eight-hour day in a messy place, let alone a home. Charging downtime, limited tasks, and a human babysitter make the business case weak. The robot market is growing, sure, but where conditions are perfect. Asia accounted for 70% of new industrial robot installations in 2023. But that doesn't mean humanoids are next; it just means tasks and environments need to be simple.
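The shift math is simple to check. Here is a minimal sketch using the vendor's stated four-hour runtime plus an illustrative recharge time and an eight-hour shift; the recharge and shift figures are assumptions, not published specifications:

# Simple duty-cycle arithmetic for a battery-limited humanoid.
# The 4 h runtime is the vendor's stated figure; the 1.5 h recharge time
# and the 8 h shift are illustrative assumptions.

runtime_h = 4.0
recharge_h = 1.5
shift_h = 8.0

cycle_h = runtime_h + recharge_h                 # one work/charge cycle
working_fraction = runtime_h / cycle_h           # share of the cycle spent working
productive_hours = shift_h * working_fraction    # expected productive time per shift

print(f"working fraction:  {working_fraction:.0%}")
print(f"productive hours:  {productive_hours:.1f} of {shift_h:.0f}")

Under those assumptions the robot works roughly three-quarters of the time and delivers under six productive hours per eight-hour shift, before counting task limits or supervision.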

Quality counts too. Experts say that China's push for automation has led to breakdowns and durability issues, especially when parts are cheap. A recent analysis warned that power limits, parts shortages, and touch-sensitivity problems are blocking progress, even though marketing says otherwise. This doesn't mean there's no progress, but don't get carried away. Ignore this, and we'll waste money on demos instead of real research.

The Safety Problem

Safety isn't just a feeling. It's about standards. Factory robots follow ISO 10218, updated in 2025, which focuses on the whole application, not just the robot. Personal-care robots follow ISO 13482, which addresses risks and safety measures around human contact. These aren't just rules. When a 60–70 kg machine with poor situational awareness falls over, it's dangerous, no matter how good the demo looks. Standards change slowly because people get hurt faster than rules can catch up.

That's why we should listen to the cautious voices. In late 2025, Rodney Brooks said we're over ten years from useful, minimally dexterous humanoids. Scientific American agrees: It's not just one missing part keeping humanoids out of our lives, but a lack of real-world smarts. If we make decisions based on flashy demos, we'll underfund research into touch, movement, and contact, where the real safety lies. The people who make standards will notice. We should too.

What to Fund: Research, Not Demos

The quickest way to address issues with humanoid robots is to improve their movement. Human muscles adjust to stiffness and give way a little. Electric motors don't. We need variable-impedance systems, tendon-driven designs, and soft robots that give way on contact. The goal isn't cool moves, but coping with the messy world. Tie funding to damage reduction, not cheering. If a hand can open a jar without smashing it—or know when to give up—that's worth paying for.
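For readers who want the control intuition, the core idea of impedance control fits in a few lines: command torque from position and velocity error, and lower the stiffness on contact so the joint yields instead of fighting. This is a generic, single-joint textbook sketch (gravity and full dynamics ignored), not any vendor's controller:

import math

def impedance_torque(q, qd, q_des, qd_des, stiffness, damping):
    """tau = K*(q_des - q) + D*(qd_des - qd): pull toward the target, resist velocity error."""
    return stiffness * (q_des - q) + damping * (qd_des - qd)

def gains(contact_detected, k_free=50.0, k_contact=10.0, damping_ratio=1.0):
    """Variable impedance: soften the joint on contact so it yields instead of fighting."""
    k = k_contact if contact_detected else k_free
    d = 2.0 * damping_ratio * math.sqrt(k)   # critically damped for unit inertia
    return k, d

# Example: one joint reaching a 0.5 rad target from 0.2 rad, before and after contact
q, qd, q_des, qd_des = 0.2, 0.0, 0.5, 0.0
for contact in (False, True):
    k, d = gains(contact)
    tau = impedance_torque(q, qd, q_des, qd_des, k, d)
    print(f"contact={contact}: K={k:.0f} N*m/rad, tau={tau:.1f} N*m")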

Next is touch. High-resolution touch sensors will change the way robots grab, more than hours of watching YouTube videos. We need lots of varied touch data, like a kid learning by making a mess. As LeCun says, without more data, robot smarts will stall. The answer is on-device data collection in living labs that look like kitchens and classrooms—with simulators that get friction right. Otherwise, the real world will always be out of reach.
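As one concrete example of why touch matters, incipient slip shows up as micro-vibrations in the contact signal before an object visibly moves. The toy detector below illustrates the idea with a synthetic signal and an arbitrary threshold; it is a sketch of the principle, not a production algorithm:

# Toy slip detector: watch for a burst of high-frequency energy in a tactile
# (shear/normal force) signal. The signal, window, and threshold are illustrative.

def slip_detected(samples, window=4, threshold=0.05):
    """Flag slip when the short-window variance of sample-to-sample changes spikes."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    for i in range(len(diffs) - window + 1):
        chunk = diffs[i:i + window]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        if var > threshold:
            return True, i
    return False, None

# Steady grip, then micro-vibration as the object starts to slip
signal = [1.00, 1.01, 1.00, 1.01, 1.00, 1.30, 0.70, 1.25, 0.75, 1.20, 0.80, 1.15]
print(slip_detected(signal))   # -> (True, 2): flags the window where vibration begins
# A controller would respond by increasing grip force or re-grasping.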

Figure 2: Installations stayed above half a million for three straight years—evidence of steady, structured automation rather than a surge in general-purpose humanoids.

Lastly, we need stamina. Warehouse work is long; homes are messy. Up to four hours of battery life is a start, but it limits what robots can do. Improving energy density is slow. And safety around heat and charging is vital. We need research into battery design, safe, fast charging, and graceful ways to shut down when power drops. And we need rules that punish duty-cycle claims that don't hold up. We should measure time between breakdowns, not marketing hype.
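Measuring time between breakdowns is straightforward once operators actually log incidents. Here is a minimal sketch with fabricated timestamps, computing the mean gap between consecutive failures (it ignores repair time, so it is a rough proxy for MTBF):

from datetime import datetime

# Mean time between failures from an incident log. Timestamps are fabricated.
failures = [
    datetime(2025, 1, 3, 9, 15),
    datetime(2025, 1, 9, 14, 40),
    datetime(2025, 1, 20, 8, 5),
    datetime(2025, 2, 2, 16, 30),
]

gaps_h = [
    (b - a).total_seconds() / 3600
    for a, b in zip(failures, failures[1:])
]
mtbf_h = sum(gaps_h) / len(gaps_h)
print(f"MTBF: {mtbf_h:.0f} hours over {len(failures)} logged failures")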

What should schools do?

First, stop selling students on android fantasies. Teach controls, sensing, safety, and testing. These skills transfer to mobile robots, surgical systems, and factory automation, where the jobs are. Second, build industry labs where students test products on real materials and in messy conditions, with safety experts on hand. Third, reward designs that fail safely and recover, not just ones that work once. We need engineers who can turn cool demos into working systems.

Admins should use robots only for clear, low-risk tasks. Assign humanoids to simple workflows, as Amazon does, where mistakes are easy to fix. Invest the rest in proven automation, like AMRs, fixed arms, and inspection systems. Use hard data—uptime, breakdowns, accidents—to guide decisions, and halt projects that don't lead to reliable production. Politicians should back robotics projects with open results, not just hype: tie funding and incentives to reporting failures, safety audits, and real-world testing; support the development of sensors, actuators, and power systems that benefit many users; and hold heavy robots to strict safety limits until risks are proven manageable.

Here's the practical test for the next three years: if we can build a robot that can fold T-shirts, carry a pot without spilling, and recover from a fall without help, we're onto something. These tasks aren't glamorous, but they exercise the physics, sensing, and control needed to deal with the real world. Solve them, and safer service robots become possible. Skip them, and the hype keeps going, louder but not smarter. Let's be real: investors will keep hyping, press releases will keep overselling, and some tests will work. But the robot future is still about machines doing specific jobs in specific places. That's valuable. It boosts productivity in the boring places that pay the bills. The all-purpose humanoid will appear only when we earn it, by funding the unglamorous science of contact, control, and power.

Millions of robots are now in use, but almost none are humanoids in public. That's not a lack of imagination; it's reality. In robotics, the action is where limits can be designed and tested. The hype is somewhere else. If we want safe androids someday, we must change the system. Support touch sensors that last, actuators that give way, and batteries that work without trouble. Demand real tests, not just videos. Treat humanoid issues as research, not branding. Do this, and we might see a robot we can trust on the sidewalk.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Agility Robotics. “Agility Robotics Announces New Innovations for Market-Leading Humanoid Robot Digit.” March 31, 2025.
Amazon. “Amazon announces 2 new ways it’s using robots to assist employees and deliver for customers.” Company news page.
International Federation of Robotics (IFR). “Industrial Robots—Executive Summary (2024).”
International Federation of Robotics (IFR). “Record of 4 Million Robots in Factories Worldwide.” Press release.
Innerbody. “Interactive Guide to the Muscular System.”
Reuters. “Robotics startup Figure raises $675 mln from Microsoft, Nvidia, other big techs.” Feb. 29, 2024.
Reuters. “Robotics startup Figure valued at $39 billion in latest funding round.” Sept. 16, 2025.
Rodney Brooks. “Why Today’s Humanoids Won’t Learn Dexterity.” Sept. 26, 2025.
Scientific American. “Why Humanoid Robots Still Can’t Survive in the Real World.” Dec. 13, 2025.
The Economy (economy.ac). “China tops 2 million industrial robots, but quality concerns persist.” Nov. 2025.
The Economy (economy.ac). “Humanoid Robots and the Division of Labor: bottlenecks persist.” Nov. 2025.
TÜV Rheinland. “ISO 10218-1/2:2025—New Benchmarks for Safety in Robotics.”
Unitree Robotics. “Unitree H1—Full-size universal humanoid robot (specifications).”
Yann LeCun (LinkedIn). “In 4 years, a child has seen 50× more data than the biggest LLMs.”
