Take It Down, Keep Moving: How to Take Down Deepfake Images Without Stalling U.S. AI Leadership
Ethan McGowan
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
48-hour takedowns for non-consensual deepfakes
Narrow guardrails curb abuse, not innovation
Schools/platforms: simple, fast reporting workflows
Deepfake abuse is a vast and growing problem. In the first half of 2025, the U.S. CyberTipline received 440,419 reports alleging that generative AI was used to exploit children, a roughly 64-fold increase over 2024. That harm compounds every day it goes unaddressed, so protections need to arrive fast. The Take It Down Act aims to remove deepfake images quickly without disrupting America's AI progress. Some worry that any rule slows things down and lets China pull ahead, but the real task is to let the technology improve while keeping people safe. The Act shows we can protect victims and sustain AI leadership at the same time.
What the Act Does to Get Rid of Deepfake Images
This law is all about what works. It makes it illegal to share intimate images online without consent, covering both real photos and AI-generated ones. Covered platforms must create a simple system for people to report these images so they can be taken down. Once a victim submits a valid request, the platform must remove the content within 48 hours and make reasonable efforts to remove any known copies. The Federal Trade Commission can penalize platforms that fail to comply. Penalties are harsher when minors are involved, and threatening to publish these images is also illegal. The point is prompt removal of deepfake images once a victim speaks up. Platforms have until May 19, 2026, to get this system running, but the criminal provisions are already in effect.
The details matter. Covered platforms are publicly available websites and apps that primarily host user-generated content; internet service providers and email are excluded. The Act includes a good-faith clause, so platforms that promptly remove material based on clear evidence are not penalized for taking down too much. It does not displace state laws, but it adds a baseline of federal protection. In effect, Congress is bringing the straightforward approach of copyright takedowns to a setting where delay causes significant harm, especially for children. The law targets content that cannot be defended as free speech: non-consensual sexual images that cause real damage. That is why, unlike many other AI proposals in Washington, this one became law.
What the Law Can and Can't Do to Remove Deepfake Images
The Act is a response to a significant increase in AI being used for exploitation. Speed matters because a single file can spread across the internet quickly; the 48-hour rule helps prevent lasting damage. The urgency built into the Act reflects how much harm delayed removal causes victims.
But a law that focuses on what we can see can miss what happens behind the scenes. Sexual deepfakes grab headlines; discriminatory algorithmic decisions often do not. If you are turned down for credit, housing, or a job by an automated system, you may never know that a computer made the decision, or why. Recent studies argue that lawmakers should take these less visible AI harms as seriously as the visible ones, through impact assessments, documentation, and remedies as strong as the ones this Act gives deepfake victims. Without those tools, the rules favor what shocks us most over what quietly limits opportunity. The goal should be equal protection: remove deepfake images quickly, and be able to spot and fix algorithmic bias in real time.
Figure 1: A sixty-four-fold jump in six months shows why a 48-hour clock matters.
There is also a limit to how much takedowns can do. The open internet routes around them. The most notorious non-consensual deepfake sites have shut down or moved under pressure, but the content resurfaces on less visible sites and in private channels, and copies reappear constantly. That is why the Act requires platforms to remove known copies, not just the original link. Schools and businesses need clear paths from the first report to platform action to law enforcement when children are involved. The Act gives victims a federal process, but each of these places needs its own system to support it.
Innovation, Competition, and the U.S.–China Dynamic
Will this takedown rule slow American innovation? The numbers say no. In 2024, private AI investment in the U.S. hit \$109.1 billion, almost twelve times China's \$9.3 billion. The U.S. also leads in producing notable models and funding AI ventures. A clear rule about non-consensual intimate images will not undermine those numbers. Good rules actually reduce risk for organizations that need to deploy AI under strict legal and ethical constraints. Safety measures are not the enemy of innovation; they encourage responsible use.
A lot is also happening in state legislatures right now. In 2025, states considered many AI-related bills; a large share mentioned deepfakes, and some became law. Many are specific to narrow situations, some conflict with one another, and a few go too far. That becomes a problem if it creates a moving target. The federal government has floated one-rulebook proposals to limit what states can do, but businesses disagree on whether that would help. The better path is coordination: keep the Take It Down rules as a foundation, add requirements based on risk, preempt state measures only where they genuinely conflict, and promote what is clear over what is confusing.
Figure 2: U.S. private AI investment dwarfs peers, showing clear guardrails won’t derail competitiveness.
What Schools and Platforms Should Do Now to Remove Deepfake Images
Schools can start by publishing a simple guide that explains how to report incidents and by folding the legal basics into digital citizenship programs. Speed and guidance are key: connect victims to counseling and to law enforcement when needed.
Platforms should use the next few months to run drills. Simulate receiving a report, verify it, remove the file, search for and delete identical copies, and document every step. Extend the takedown process to every surface where content spreads: web, mobile, and in-app. If your platform allows third-party uploads, make sure a single victim report reaches every copy the host tracks under an internal ID. Publish brief reports on response times as May 19, 2026, approaches; the hours you save now prevent problems later. For smaller platforms, adapting existing DMCA workflows can lower both risk and engineering cost.
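For illustration, here is a minimal sketch of such a drill in Python, assuming a hypothetical in-memory content store and exact hash matching; a production system would add perceptual hashing to catch re-encoded copies, a durable audit store, and identity verification at intake.

```python
import hashlib
import time
from dataclasses import dataclass, field

TAKEDOWN_SLA_SECONDS = 48 * 3600  # the 48-hour clock set by the Act

@dataclass
class TakedownReport:
    report_id: str
    reported_hash: str          # hash of the flagged file, supplied at intake
    received_at: float = field(default_factory=time.time)

def sha256_of(data: bytes) -> str:
    """Content hash used to match known copies of the same file."""
    return hashlib.sha256(data).hexdigest()

def process_report(report: TakedownReport, store: dict, audit_log: list) -> None:
    """Remove the reported file and any identical copies, then record the action."""
    removed = [item_id for item_id, blob in store.items()
               if sha256_of(blob) == report.reported_hash]
    for item_id in removed:
        del store[item_id]                     # takedown of the original and exact duplicates
    elapsed = time.time() - report.received_at
    audit_log.append({
        "report_id": report.report_id,
        "items_removed": removed,
        "seconds_to_action": round(elapsed, 1),
        "within_sla": elapsed <= TAKEDOWN_SLA_SECONDS,
    })
```

Exact hashes only catch byte-identical copies, which is one reason the Act's duty to remove known copies pushes platforms toward more robust matching in practice.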
Beyond Takedown: Building More Trust
The Act gives us an opening. Use it to raise the standard. First, invest in image authentication where it matters most. Schools that use AI should adopt tools that support provenance tracking; that will not catch everything, but it ties doubt to data. Second, make sure people can see when an automated tool was used and can challenge it where it counts. That mirrors what the Act grants deepfake victims: a way to say, “That's wrong, fix it.” Third, treat trust work as a growth strategy. America does not fall behind by making its platforms safer; it gains adoption and lowers risk while keeping the pipeline of students and workers open.
The volume of deepfakes will increase. That should not stop research; it should push platforms to keep clear notice channels in place and to uphold a duty of care for kids. The Act does not solve every AI problem, but by prioritizing fast response and transparency, it builds trust in what AI can do. Taking these steps now is itself a commitment to innovation.
We started with an alarming number, and it underlines the need for action. The Take It Down Act requires platforms to remove this content, and the law is clear. It will not fix everything; other harms still need to be addressed, and it will not end the contest between creators and censors. But it gives victims real control, and it lets us remove deepfake images quickly. That is what we should seek: a cycle of trust.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Brookings Institution. (2025, December 11). Addressing overlooked AI harms beyond the TAKE IT DOWN Act.
Congress.gov. (2025, May 19). S.146 — TAKE IT DOWN Act. Public Law 119-12, 139 Stat. 55.
MultiState. (2025, August 8). Which AI bills were signed into law in 2025?
National Center for Missing & Exploited Children. (2025, September 4). Spike in online crimes against children a “wake-up call”.
National Conference of State Legislatures. (2025). Artificial Intelligence 2025 Legislation.
Skadden, Arps, Slate, Meagher & Flom LLP. (2025, June 10). ‘Take It Down Act’ requires online platforms to remove unauthorized intimate images and deepfakes when notified.
Stanford Human-Centered AI (HAI). (2025). The 2025 AI Index Report (Key Findings).
Ethan McGowan
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Teen chatbot safety is a public-health issue as most teens use AI daily
Adopt layered age assurance, parental controls, and a school “teen mode” with crisis routing
Set a regulatory floor and publish safety metrics so safe use becomes the default
Here’s a stat that should grab your attention: around 64% of U.S. teens now use AI chatbots, and about a third use them every day. Teen chatbot safety isn’t a niche concern anymore; it’s part of teens’ daily online lives. And when something becomes a habit, small slip-ups can turn into big problems: bad information about self-harm, wrong facts about health, or a late-night so-called friend that warps ideas about consent.
The real question isn’t whether teens are using chatbots (they clearly are) but whether schools, families, and the platforms themselves can write rules that keep up with how fast and how intensely these tools are being used. We need to move from letting anyone jump in with no guardrails to a protective approach that stops problems before they start. “Good enough for adults” isn’t the standard; it needs to be safe enough for kids. That bar must be higher, and it should have been raised already.
Why Teen Chatbot Safety is a Health Concern
Teens aren’t just using these tools for homework anymore; they’re using them for emotional support. In the UK, one in four teens asked AI for help with their mental health in the past year, usually because help from real people felt slow, judgmental, or harsh. A U.S. survey shows most teens have tried chatbots, and a sizable share use them daily. That combination of heavy use and private conversations becomes risky when a chatbot answers wrong, is unclear, or sounds too sure of itself. It also normalizes a friend who is always available. That isn’t harmless; it changes how teens think about advice, independence, and asking for help.
If something has that much influence, it needs safety measures built in. The evidence is there now: teens use chatbots heavily and for serious matters, so policy has to treat this as ongoing, not as an app you download once. A serious event pushed people to demand action. In September 2025, a family said their teen’s suicide followed months of conversations with a chatbot. Afterward, OpenAI said it would check ages and tailor answers for younger users, especially around suicidal thoughts. That came after earlier talk of possibly alerting authorities when there were signs of self-harm. It was a touchy subject, but it showed the stakes: heavy, unsupervised use at a young and uncertain age is exactly what makes persuasive but sometimes wrong systems risky. If the main user is a kid, the people in charge need to do what they can to keep that kid away from material that could hurt them.
Figure 1: Daily use is common and intense: 28% use at least weekly and 24% use daily, with 4% almost constantly—while 36% still don’t use chatbots at all.
From Asking Your Age to Actually Knowing It: Making Teen Chatbot Safety Work
Some basic rules already exist. Big companies require users to be at least 13, and anyone under 18 needs a parent’s permission. In 2025, OpenAI introduced parental controls so parents and teens could link accounts, set limits, block features, and add extra safety. The company also discussed age prediction and ID checks so users get the right settings and less risky content. These are must-haves, and they’re better than relying on self-reported ages. Of course, consent has to mean something: controls have to be accurate and hard to get around, and protecting kids should come first, especially at night, when risky chats tend to happen.
Figure 2: Black and Hispanic teens are ~50% more likely than White peers to use chatbots daily, sharpening the case for teen-mode guardrails that work across contexts.
Knowing a user’s age should be a system, not a one-time check. Schools and families need layered protection: account settings, device rules, and safety features for younger users. Policies should require real risk management, including spotting likely failures (for example, incorrect information), putting controls in place, measuring how they perform, and adjusting when needed.
In other words, teen chatbot safety should mean stricter handling of sensitive topics, set quiet hours when the tools can’t be used, blocks on romantic or sexual content, and an easy path to a real person when there’s a serious problem, all while keeping teens’ data private and avoiding unnecessary alarms. The standard should be high confidence, and platforms should have to demonstrate it. These rules are what turn promises into practice.
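To make that concrete, here is a minimal sketch of how a teen-mode router might apply those rules; the topic labels, quiet-hours window, and action names are illustrative assumptions, not any platform’s actual policy.

```python
from datetime import time as clock

QUIET_HOURS = (clock(22, 0), clock(6, 0))         # assumed 10pm-6am window
BLOCKED_TOPICS = {"romantic", "sexual"}            # blocked outright in teen mode
SENSITIVE_TOPICS = {"self_harm", "eating_disorder"}

def route_teen_request(topic: str, now: clock) -> str:
    """Decide how a teen-mode request is handled; returns an action label."""
    in_quiet_hours = now >= QUIET_HOURS[0] or now < QUIET_HOURS[1]
    if topic in SENSITIVE_TOPICS:
        # Crisis routing: hand off to a human or hotline instead of free-form chat.
        return "escalate_to_crisis_resources"
    if topic in BLOCKED_TOPICS:
        return "refuse"
    if in_quiet_hours:
        return "defer_until_morning"
    return "answer_with_teen_safeguards"

# Example: a self-harm query at 23:30 is escalated, not answered in chat.
assert route_teen_request("self_harm", clock(23, 30)) == "escalate_to_crisis_resources"
```

The design choice that matters is that crisis topics short-circuit everything else, including quiet hours.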
A Schools-First Approach to Teen Chatbot Safety
Schools sit in the middle of all this, yet policies often treat them as an afterthought. Surveys from 2024 and 2025 found that many teens use AI for schoolwork, often without teachers knowing, and many parents say schools haven’t explained what is and isn’t allowed. That gap is fixable. Districts can put a simple, workable plan in place: tier access by task, build in human review, and set clear rules about data.
Concretely, there should be one mode for fact-checking and drafting with sources, and another, stricter mode for personal or emotional questions. A teacher should be able to see, on their own device, what the student is doing, what they’re asking about, and a short rationale for why it’s allowed. Data rules need to be explicit: don’t train the AI on student chats, keep logging to a minimum, and set firm deletion schedules. Schools already do this for search engines and video; they should do it for chatbots too.
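One hedged way to pin those rules down is a small, declarative deployment configuration like the sketch below; the field names and values are illustrative assumptions rather than any vendor’s real settings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatbotDeploymentConfig:
    mode: str                     # "research" (sources required) or "wellbeing" (stricter)
    require_citations: bool
    teacher_dashboard: bool       # teacher can see activity and rationale
    train_on_student_chats: bool  # must stay False under the data rules above
    log_retention_days: int

RESEARCH_MODE = ChatbotDeploymentConfig(
    mode="research", require_citations=True, teacher_dashboard=True,
    train_on_student_chats=False, log_retention_days=30,
)

WELLBEING_MODE = ChatbotDeploymentConfig(
    mode="wellbeing", require_citations=False, teacher_dashboard=True,
    train_on_student_chats=False, log_retention_days=7,   # shorter retention for sensitive chats
)
```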
Teen chatbot safety at school should also cover social skills, not just technical barriers. Health teachers should show how to find evidence and spot wrong answers. Counselors should handle risks the tools flag. School leaders should set rules for nighttime use tied to attendance and well-being. Companion chatbots need extra thought: studies show teens can become emotionally attached to artificial friends, which blurs boundaries and can normalize controlling behavior. Schools can ban romantic companions on devices they manage while still allowing educational tools, and explain why: consent, empathy, and real-world skills don’t develop in isolation. It’s not about scaring kids; it’s about steering them toward learning and away from the parts of adolescence that are easiest to exploit.
Thinking About the Criticisms—and How to Answer Them
Criticism one: checking ages can invade privacy and misclassify users. That’s true, so the best designs start with on-device age prediction, use rules that can be adjusted, and only check IDs when there is a serious risk and consent has been given. There are clear guides on matching the method to the risk: use the least intrusive approach that still works, track error rates, and document the trade-offs. Schools can further reduce risk by keeping the checks out of a student’s academic record and by not storing raw images when estimating age from faces. Transparency is key: publish error rates, tell teens and parents how to contest a decision, and let people report misuse confidentially. Done this way, teen chatbot safety can move forward without turning every laptop into a surveillance tool.
Criticism two: strict safety measures could deepen inequality, since students with their own devices can route around school rules and pull ahead. The answer is to make safe access the easiest option everywhere. Link school accounts to home devices with settings that travel with the student, apply the same modes and quiet hours across the board, and don’t train on student data, so families can trust the tools. When the easiest path is also the safest one, fewer students will try to game the system.
And the claim that safety measures block learning ignores what is already happening: teens use chatbots for quick answers today. Better design can raise the bar by requiring sources, surfacing different points of view, and controlling tone on sensitive topics involving kids. Everyone does better when every student gets structured access than when some can buy their way around the guardrails.
What the People in Charge Should Do Next
For policymakers, that means acting now: require that any chatbot teens use meets three rules. First, make age checks real: start with the least intrusive method possible and escalate only as needed, with independent audits for bias and error. Second, require a teen mode that limits features, uses a calmer tone in emotional moments, strictly blocks sexual content and romantic companions, and makes it easy to reach a real person in an emergency. Third, demand school-ready tools: admin controls, task-limited access, minimal data collection, and a public safety card that spells out the protections for teens.
These protections should still let older teens reach more advanced content with consent, while keeping the tools fit for what teens actually need. Reference risk-management frameworks in every rule so agencies know how to monitor compliance. Act now to protect teens where they already spend their online lives.
Platforms should publish their youth-safety numbers regularly. How often do teen-mode models block things they shouldn’t? How often do crises come up, and how fast do humans step in? What is the age-misclassification rate? What are the top three areas where teens get wrong information, and what fixes have been made? Without these numbers, safety is just marketing; with them, it becomes real. Independent researchers and youth-safety groups should be part of the process.
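As a rough illustration, the sketch below shows how those figures could be computed from incident logs, assuming a hypothetical log format; real reporting would need audited definitions for each metric and independent verification.

```python
from statistics import median

def youth_safety_card(events: list) -> dict:
    """Summarize youth-safety metrics from a list of logged events.

    Each event is assumed to look like:
      {"type": "block"|"crisis"|"age_check", "false_positive": bool,
       "response_seconds": float, "misclassified": bool}
    """
    blocks = [e for e in events if e["type"] == "block"]
    crises = [e for e in events if e["type"] == "crisis"]
    age_checks = [e for e in events if e["type"] == "age_check"]
    return {
        "overblock_rate": (sum(e["false_positive"] for e in blocks) / len(blocks)) if blocks else 0.0,
        "crisis_count": len(crises),
        "median_human_response_seconds": median(e["response_seconds"] for e in crises) if crises else None,
        "age_error_rate": (sum(e["misclassified"] for e in age_checks) / len(age_checks)) if age_checks else 0.0,
    }
```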
Current work from cyberbullying researchers documents the new risks of companion interactions and lays out a research agenda schools can follow. Policymakers can fund partnerships among districts, researchers, and companies to test teen modes in the real world and compare outcomes across different safety designs. The standard should be public, not proprietary: teen chatbot safety improves when problems are out in the open.
The truth is, chatbots are now part of the daily lives of millions of teens. That’s not going to change. The only choice we have is whether we create systems that admit teens are using these things and do a good job of dealing with that. We already have everything we need: age checks that are better than just clicking a box, parental controls that actually link accounts, school systems that block risky stuff while pushing good habits, and audits to make sure the guardrails are doing their job. This isn’t about attacking technology. It’s just about being responsible and making sure things are safe in a world where advice can come instantly and sound super sure. Set the bar. Measure what matters. Share what you find. If we do all this, teen chatbot safety can move from being just a catchy phrase to being something real. And the odds of seeing news stories about problems we could have stopped will go way down.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Common Sense Media. (2024, June 3). Teen and Young Adult Perspectives on Generative AI: Patterns of Use, Excitements, and Concerns.
Common Sense Media. (2024, September 18). New report shows students are embracing artificial intelligence despite lack of parent awareness and…
Common Sense Media. (2024). The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School.
Cyberbullying Research Center. (2024, March 13). Teens and AI: Virtual Girlfriend and Virtual Boyfriend Bots.
Cyberbullying Research Center. (2025, January 14). How Platforms Should Build AI Chatbots to Prioritize Youth Safety.
NIST. (2024). AI Risk Management Framework and Generative AI Profile. U.S. Department of Commerce.
OpenAI. (2024, December 11). Terms of Use (RoW). “Minimum age: 13; under 18 requires parental permission.”
OpenAI. (2024, October 23). Terms of Use (EU). “Minimum age: 13; under 18 requires parental permission.”
OpenAI. (2025, September 29). Introducing parental controls.
OpenAI Help. Is ChatGPT safe for all ages? “Not for under-13; 13–17 need parental consent.”
Pew Research Center. (2025, December 9). Teens, Social Media and AI Chatbots 2025.
Scientific American. (2025). Teen AI Chatbot Usage Sparks Mental Health and Regulation Concerns.
The Guardian. (2025, September 11). ChatGPT may start alerting authorities about youngsters considering suicide, says Altman.
The Guardian. (2025, September 17). ChatGPT developing age-verification system to identify under-18 users after teen death.
TechRadar. (2025, September 2 & September 29). ChatGPT parental controls—what they do and how to set them up.
Beyond the AGI Hype: Why Schools Must Treat AI as Probability, Not Reasoning
Catherine Maguire
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI today is pattern-matching, not reasoning—resist AGI hype
Redesign assessments for explanation, sources, and uncertainty
Procure on outcomes and risk logs; keep humans in charge
The core challenge facing schools this year is not whether to use AI, but how to understand what it is: an engine of sophisticated pattern prediction, not genuine reasoning. In the United Kingdom, 92 percent of university students now use generative AI for academic work, and 88 percent use it in assessments, clear evidence that day-to-day learning has already changed. The real risk is not that students use these tools; it is that institutions misread what the tools are. If education systems treat probability engines as minds, they will mistake fluency for understanding and speed for mastery. Policy should focus on what these systems actually do: they make predictions from data, not judgments by human-style reasoning. Recognizing this lets schools make strategic decisions that enhance productivity and feedback without compromising standards or judgment.
AGI hype and the difference between probability and reasoning
Here is the main point. Today’s leading models mostly connect learned associations. Their outputs seem coherent because the training data are extensive and the patterns are rich. However, the mechanism remains statistical prediction rather than grounded reasoning. Consider two facts. First, training compute has exploded—roughly doubling every six months since 2010—because greater scale enables better pattern matching across many areas. Second, when evaluation is cleaned of leaks and shortcuts, performance declines. These two trends support a clear conclusion: gains come from more data, more compute, and more clever sampling at test time, not from a leap to general intelligence. Education policy should focus on robust probability rather than artificial cognition.
The phrase “reasoning models” adds confusion. Yes, some new systems take more time to think before answering and score much higher on complex math. OpenAI reports that its o1 line improved from 12 percent to 74 percent on the AIME exam with single attempts. That is real progress. However, it is narrow, costly, and sensitive to prompt design and evaluation choices. This tells us that staged searches over intermediate steps help with well-defined problems, not that the model has acquired human-like understanding. Schools should not confuse a better solver in contest math with a reliable explainer across complex, interdisciplinary tasks. In classrooms, we need tools that can retrieve sources, expose uncertainty, and withstand checks for contamination and hallucination. “More correct answers on a benchmark” is not the same as “more trustworthy learning support.”
Figure 1: Use jumped from 53% to 88% in one year—proof of mass adoption, not proof of reasoning
AGI hype in the funding cycle and what it does to classrooms
The money flowing into generative AI is significant and increasing. In 2024, private AI investment in the United States reached about $109 billion. Funding for generative AI climbed to around $34 billion worldwide, up from 2023. Districts are reacting: by fall 2024, nearly half of U.S. school districts had trained teachers on AI, a 25-point increase in just one year. Students are moving even faster, as the U.K. figure shows. This is the AGI hype cycle: capital fuels product claims; product claims drive adoption; and adoption creates pressure to buy more and make more promises. The danger is not in adoption itself. It is how hype blurs the line between what AI does well—summarizing, drafting, style transfer, code scaffolding—and what education needs for durable learning—argument, transfer, and critique.
Figure 2: Training more than doubles in a year and targets ~75% by 2025—capacity building is racing to catch up with hype
Skepticism has not slowed use. A recent national survey in the U.S. found that about 70 percent of adults have used AI, and around half worry about job losses. In schools, many teachers already report time savings from routine tasks. Gallup estimates that regular users save about 6 hours a week, about 6 weeks over a typical school year. Those are significant benefits. But productivity gains come with risks. NIST’s updated AI risk guidance emphasizes how models can be fragile and overconfident, with hallucinations persisting even in the latest systems. Education leaders should recognize both sides of that equation: significant time savings on one hand, and serious failure modes on the other, amplified by hype.
Design for probability: assessment, curricula, and guardrails
If we acknowledge that current systems are advanced probability matchers, then policy must shape the workflow to leverage probability when helpful and limit it when it causes harm. Start with an assessment. Exams that reward surface fluency invite model-aided answers that look right but lack genuine understanding. The solution is not a ban; it is stress-testing. The U.K. proposal for universities to stress-test assessments is a good example: move complex tasks into live defenses, stage drafts with oral checks, and require portfolios that document the provenance of artifacts. Retrieval-augmented responses with citations should be standard, and students should be able to explain and replicate their steps within time limits. This is not a nostalgic return to pen-and-paper. It aligns with how these tools actually function. They excel at drafting but struggle to provide verifiable chains of reasoning without support. Design accordingly—and name the approach for what it is: a straightforward resistance to AGI hype.
Curricula should make uncertainty visible. Teachers can create activities in which models must show their sources, state confidence ranges, and defend each claim against another model or a curated knowledge base. This is where risk frameworks come in. NIST’s guidance and Stanford’s HELM work suggest evaluation practices that assess robustness, not just accuracy, across tasks and datasets. Dynamic evaluation also counters benchmark contamination. When tasks change, memorized patterns break, revealing what models can and cannot do. Building this habit in classrooms helps students distinguish between plausible and supported answers. It also fosters a culture in which AI is a partner that requires examination. That is the literacy we need far more than prompt tricks.
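As one possible classroom-facing check, the sketch below accepts a model answer only if every citation resolves against a curated knowledge base and a confidence value is stated; the knowledge base, field names, and threshold are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    cited_sources: list        # identifiers into the curated knowledge base
    confidence: float          # model- or rubric-assigned, in [0, 1]

KNOWLEDGE_BASE = {             # illustrative curated snippets keyed by source id
    "src-01": "The Treaty of Versailles was signed in 1919.",
    "src-02": "Training compute for notable models has roughly doubled every six months.",
}

def accept_answer(answer: ModelAnswer, min_confidence: float = 0.6):
    """Accept an answer only if every citation resolves and confidence is stated and adequate."""
    if not answer.cited_sources:
        return False, "no sources cited"
    unresolved = [s for s in answer.cited_sources if s not in KNOWLEDGE_BASE]
    if unresolved:
        return False, f"unknown sources: {unresolved}"
    if answer.confidence < min_confidence:
        return False, "confidence below threshold; route to teacher review"
    return True, "accepted with visible sources and confidence"
```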
Managing the next wave of AGI hype in policy and procurement
Education ministries and districts face a procurement challenge. Vendors use the language of “reasoning,” but contracts must focus on risk and value. A practical approach is to write requirements that reflect probability rather than personhood. Demand audit logs of model versions, prompts (the input commands or queries given to AI), and retrieval sources. Require default uncertainty displays (visual markers of answer confidence) and simple toggles that enforce citations in student-facing outputs. Mandate out-of-distribution tests (tests on data different from what the model saw in training) during pilot projects. Tie payment to improvements in verified outcomes—reducing time-to-feedback, gaining in transfer tasks (applying knowledge in new contexts), or fewer grading disputes—not to benchmark headlines. Use public risk frameworks as a foundation, then add domain-specific checks, such as content provenance (a clear record of the source material) for history essays and step-wise explanations (detailed breakdowns of each stage in a solution) for math. This is how we turn AGI hype into measurable classroom value.
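One way a district might operationalize those contract terms is an audit-log schema along the lines of the sketch below; the field names and file format are illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    model_version: str            # exact model identifier billed under the contract
    prompt: str                   # the input command or query sent to the model
    retrieval_sources: list       # documents surfaced to ground the answer
    uncertainty_shown: bool       # was a confidence display rendered to the student?
    citations_enforced: bool      # was the citation toggle on for this output?
    timestamp: str = ""

def log_interaction(record: AuditRecord, logfile: str = "ai_audit.jsonl") -> None:
    """Append one interaction to an audit log the district can inspect during the pilot."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
```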
Teacher development needs the same clarity. The RAND (Research and Development) findings show rapid growth in training across districts, but the training content often lags behind actual tool use. Teachers need three things: how to use AI to speed up routine work without losing oversight, how to create assignments that require explanation and replication, and how to teach AI literacy in simple terms. That literacy should cover both the social aspects—bias (unfair preferences in outputs) and fairness—and the statistical aspects—what prediction means, how sampling (choosing from possible AI responses) can alter results, and why hallucinations (AI-generated errors) occur. UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) guidance is clear about the need for human-centered use. Make that tangible with lesson plans that encourage students to challenge model claims against primary sources, revise model drafts with counter-arguments, and label unsupported statements. Time saved is real; so must be the structure ensuring learning.
What this means for research, innovation, and the road to real reasoning
Research is moving quickly. Google and OpenAI have announced significant progress in elite math competitions and complex programming tasks, often by increasing test-time compute and exploring more reasoning paths. These accomplishments are impressive. However, they also show the gap between success in contests and everyday reliability. Many of these achievements depend on long, parallel chains of sampled steps and expensive hardware. They do not transfer well to noisy classroom prompts about local history or unclear policy debates. As new benchmarks come out to filter out bias and manipulation, we see a more modest view of progress in broad reasoning. The takeaway for education is not to overlook breakthroughs, but to view them as examples under specific conditions rather than as guarantees for general use. We should use them where they excel—such as in math problem-solving and code generation with tests—while keeping human understanding at the forefront.
We started with the number 92. It highlights the reality that generative AI is already part of the assessment process. The risk is not in the tool itself; it is in the myth that the tool can now “reason” like a student or a teacher. In reality, today’s systems are extraordinary engines of probability. Their scale and intelligent searching have made them fluent, helpful, and often correct, but not reliably aware of their limitations. When refined evaluations eliminate shortcuts and leaks, performance falls in ways that matter for education. That is why policy should resist AGI hype and design for the systems we actually have. Build assessments that value explanation and verification. Procure products that log, cite, and quantify uncertainty by default. Train teachers to use AI for efficiency while teaching students to question it. Keep human judgment—slow, careful, and accountable—at the core. If we accomplish this, we can gain the benefits of productivity and feedback without compromising standards or trust. We can also prepare for the day when reasoning goes beyond a marketing claim—because our systems will be ready to test it, measure it, and prioritize learning first.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Axios. (2025, December 10). Americans are using and worrying about AI more than ever, survey finds.
Deng, C., et al. (2024). Investigating Data Contamination in Modern Benchmarks for Large Language Models. NAACL 2024.
Epoch AI. (2024, June 19). The training compute of notable AI models has been doubling roughly every six months.
Gallup. (2025, June 24). Three in 10 Teachers Use AI Weekly, Saving Six Weeks per Year.
NIST. (2024, July 25). Artificial Intelligence Risk Management Framework (AI RMF 1.0 update materials).
NIST. (2025, November). Assessing Risks and Impacts of AI (ARIA) 0.1.
OpenAI. (2024, September 12). Learning to reason with LLMs (o1).
RAND Corporation. (2025, April 8). More Districts Are Training Teachers on Artificial Intelligence.
Stanford HAI. (2025). AI Index Report 2025.
The Guardian. (2025, February 26). UK universities warned to ‘stress-test’ assessments as 92% of students use AI.
UNESCO. (2025, April 14). Guidance for generative AI in education and research (updated).
Off-Planet Compute, On-Earth Learning: Why "Space Data Centers" Should Begin with Education
Catherine Maguire
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Use space data centers to ease Earth’s compute and water strain—start with education
Run low-latency classroom inference in LEO; keep training and sensitive data on Earth
Pilot with strict SLAs, life-cycle audits, and debris plans before scaling
The key figure we should focus on is this: global data centers could use more than 1,000 terawatt-hours of electricity by 2026. This accounts for a significant share of global power consumption and is increasing rapidly as AI transitions from labs to everyday workflows. While this trend offers tangible benefits, it also creates challenges in areas with limited budgets, particularly in schools and universities. Water-scarce towns now face pressure from new server farms. Power grids can delay connections for years. Campuses see rising costs for cloud-based tools or experience slowdowns when many classes move online simultaneously. The concept of space data centers may sound like science fiction. Still, it addresses a pressing, immediate policy goal: expanding computing resources for education without further straining local water, land, and energy supplies. If we want safe, fair AI in classrooms, the real question is no longer whether the idea is exciting. The question is whether education should drive the initial trials, establish guidelines, and define the procurement rules that follow.
Reframing Feasibility: Space Data Centers as Education Infrastructure
Today's debate tends to focus on engineering bravado—can we lift racks into orbit and keep them cool? This overlooks the actual use case. Education needs reliable, fast, and cost-effective processing for millions of small tasks: grading answers, running speech recognition, translating content, and powering after-school AI tutors. These tasks can be spread out, stored, and timed. They do not all require immediate processing, as a Wall Street trader or a gamer does. The reframing is straightforward: view space data centers as a backup and support layer for public-interest computing. Keep training runs and the most sensitive student data on the ground. Offload bursty, repeatable processing jobs to orbital resources during peak times, nights, or exam periods. Education, not advertising or cryptocurrencies, is the best starting point because it offers high social returns, has predictable demand (at the start of terms and during exam periods), and can accept slightly longer processing times—if managed well—in many situations.
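As a sketch of that reframing, the snippet below routes a job either to ground or to a hypothetical orbital inference tier based on data sensitivity, latency tolerance, and current terrestrial load; the thresholds and labels are illustrative assumptions, not a deployed policy.

```python
from dataclasses import dataclass

@dataclass
class ComputeJob:
    kind: str                 # e.g. "training", "inference", "batch_grading"
    contains_student_pii: bool
    deadline_ms: int          # acceptable end-to-end latency for this task
    peak_hours: bool          # is the terrestrial cluster currently at peak load?

LEO_ROUND_TRIP_MS = 60        # assumed median LEO latency from field measurements

def place_job(job: ComputeJob) -> str:
    """Return 'ground' or 'orbital' under the education-first offloading policy."""
    if job.kind == "training" or job.contains_student_pii:
        return "ground"                       # training and sensitive data stay on Earth
    if job.deadline_ms < LEO_ROUND_TRIP_MS:
        return "ground"                       # too latency-sensitive for the orbital tier
    if job.peak_hours:
        return "orbital"                      # burst to orbit when local grids are strained
    return "ground"
```

Keeping training and anything tied to student records on the ground matches the division of labor argued for above.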
This viewpoint is critical now because the pressures on Earth-based resources are real. The International Energy Agency predicts that data center electricity use could surpass 1,000 TWh by 2026 and continue to rise toward 2030, even without a fully AI-driven world. Water resources are equally stretched. A medium-sized data center can consume about 110 million gallons of water per year for cooling. In 2023, Google alone reported over 5 billion gallons used across its sites, with nearly a third drawn from areas facing medium to high water shortages. When a district must choose between building a new school or a new power substation, the trade-off becomes significant. Shifting part of the computing layer to continuous, reliable solar power in orbit does not eliminate these trade-offs, but it can alleviate them if initial efforts prioritize public needs and include strict environmental accounting from the outset.
What the Numbers Say About Space Data Centers
Skeptics are right to ask for data, not just metaphors. Launch costs to put equipment into orbit have dropped, with SpaceX offering standard rates. Several companies are studying ways to make space data centers work and believe it could bring environmental benefits under certain conditions. Technology leaders are also researching prototypes that use solar energy and new communication methods. These projects are at an early stage, but offer an opportunity for policy planning to occur alongside engineering.
Latency is the second important metric for classrooms. LEO satellite networks already achieve median latencies in the 45-80 ms range, depending on routing and timing, which is comparable to many terrestrial connections. This is insufficient for high-frequency trading but acceptable for most educational technology tasks, such as real-time captioning and adaptive learning, when caching is used effectively. Peer-reviewed tests conducted in 2024-2025 show steady improvements in low-orbit latency and packet loss. The implication is clear: if processing is staged near ground points, and if content is cached at the orbital edge, numerous educational tasks can run without noticeable delays. Training large models will remain on Earth or in a hybrid cloud, where power, maintenance, and compliance are more manageable. However, the inference tier—the part that impacts schools—can be moved. This is where the new capacity offers the most support and causes the least disruption.
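To make the latency claim concrete, here is a rough end-to-end budget under stated assumptions (a roughly 60 ms median LEO round trip plus modest cache and inference overheads); the figures are illustrative, not measurements.

```python
# Rough end-to-end latency budget for a cached classroom inference request (milliseconds).
# All values are illustrative assumptions consistent with the 45-80 ms LEO medians cited above.
leo_round_trip_ms = 60        # student device <-> orbital edge via LEO link
edge_cache_lookup_ms = 5      # cached lesson content or model artifacts at the orbital node
inference_ms = 120            # small captioning / feedback model on the orbital node
ground_sync_ms = 0            # deferred; responses do not wait for ground synchronization

total_ms = leo_round_trip_ms + edge_cache_lookup_ms + inference_ms + ground_sync_ms
print(f"Estimated response time: {total_ms} ms")   # ~185 ms, comfortably interactive for edtech tasks
```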
Figure 1: Global data center electricity use could more than double from 2022 to 2026, tightening budgets and grids that schools rely on; this is the core pressure space data centers aim to relieve.
Latency, Equity, and the Classroom Edge with Space Data Centers
The case for equity is strong. Rural and small-town schools often have limited access to reliable infrastructure. When storms, fires, or heat waves occur, they are the first to lose service and take the longest to recover. Space data centers could serve as a floating edge, keeping essential learning tools operational even when local fiber connections are down or power is limited. A school district could sync lesson materials and assessments with orbital storage in advance. During outages, student devices can connect via any available satellite link and access the cached materials, while updates wait until connections stabilize. For special education and language access, where speech-to-text and translation are critical during class, this buffer can make a major difference. The goal is to design for processing near content, rather than pursuing flashy claims about space training.
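A minimal sketch of that continuity pattern appears below, assuming hypothetical local, orbital, and ground stores; the point is the fallback order, not any specific vendor API.

```python
from typing import Optional

def fetch_lesson(lesson_id: str, local_cache: dict, orbital_cache: dict,
                 ground_store: Optional[dict] = None) -> str:
    """Serve a lesson from the fastest available tier, degrading gracefully during outages."""
    if lesson_id in local_cache:                   # 1. device cache, pre-synced before the outage
        return local_cache[lesson_id]
    if lesson_id in orbital_cache:                 # 2. orbital edge cache, reachable over any satellite link
        local_cache[lesson_id] = orbital_cache[lesson_id]
        return local_cache[lesson_id]
    if ground_store is not None:                   # 3. normal terrestrial path once fiber and power return
        local_cache[lesson_id] = ground_store[lesson_id]
        return local_cache[lesson_id]
    raise LookupError(f"{lesson_id} unavailable offline; queue the request for the next sync window")
```

The design choice that matters is that the orbital tier backfills the device cache, so each outage leaves the school better prepared for the next one.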
Figure 2: Low-Earth-orbit links now deliver classroom-ready median latencies; geostationary paths remain an order of magnitude slower—underscoring why education pilots should target LEO for inference.
The environmental equity argument is also essential. Communities near large ground data centers bear the burden of water usage, diesel backups, and noise. Moving some processing off-planet does not eliminate launch emissions or the risk of space debris, but it can reduce local stressors on vulnerable watersheds. To be credible, however, initial efforts should provide complete reports on carbon and water use throughout the life cycle: emissions from launches, in-space operations, de-orbiting, debris removal, and the avoided local cooling and water use. Educators can enforce this transparency through their purchasing decisions. They can require independent environmental assessments, mandate end-of-life de-orbiting plans, and tie payments to verified ecological performance rather than mere promises. When approached in this manner, space becomes a practical tool for relieving pressure on Earth as we develop more sustainable grids and regulations.
A Policy Roadmap to Test and Govern Space Data Centers
The main recommendation is to launch three targeted, controlled pilot programs over two school years to shape education technology proactively. The first pilot focuses on content caching, in which national education bodies and open-education providers pre-position high-use resources for reading support in orbit via low-orbit satellites, targeting under 100 ms latency and strict privacy. The second pilot tests AI inference by evaluating speech recognition, captioning, and formative feedback on orbital nodes, ensuring reliable terrestrial backups and maintaining logs for bias and error assessment. The third pilot provides emergency continuity during outages or storms, prioritizing students needing assistive tech. Each pilot includes a ground control group to measure actual educational gains and improvements in access, not just network metrics.
Procurement and governance must go hand in hand—take decisive steps to shape them now. Ministries and agencies should immediately design model RFPs that pay only for actual processing, limit data in orbit to 24 hours unless consent is given, and require end-to-end encryption managed on Earth. Insist that providers map education rules like FERPA/GDPR to orbital processes, enforce latency standards, and fully commit to zero-trust security. Demand signed debris-mitigation and de-orbiting plans in every contract and tie payments to verified environmental outcomes. Do not wait for commercial offers: by setting these requirements now, education can become the leader—and the primary beneficiary—in the responsible, innovative adoption of space data center technology.
The Market Will Come—Education Should Set the Terms
The commercial competition is intensifying. Blue Origin has reportedly been working on orbital AI data centers, while SpaceX and others are investigating upgrades that could support computing loads. Startups are proposing “megawatt-class” orbital nodes. Tech media often portrays this as a battle among large companies, but the initial steady demand may come from the public sector. Education spends money in predictable cycles, values reliability over sheer speed, and can enter multi-year capacity agreements that reduce risks for early deployments. The ASCEND study indicates feasibility; Google’s team has shared plans for a TPU-powered satellite network with optical links; academic research outlines tethered and cache-optimized designs. None of this guarantees costs will be lower than ground systems in the immediate future. Still, it presents a path for specific, limited tasks where the overall cost, including water and land, is less per learning unit. That should be the key measure guiding us.
What about the common objections? Cost is a genuine concern, but declining launch prices and improved packing densities change the game. A tighter focus on processing tasks and caching means less reliance on constant, high-bandwidth data transfers. Latency is manageable with LEO satellites and intelligent routing, as field data now shows median latencies in mature markets of tens of milliseconds. Reliability can be improved through backup systems, graceful degradation of ground systems, and resilience during disasters. Maintenance is a known challenge; small, modular units with planned lifespans and guaranteed de-orbit procedures mitigate that risk. And yes, rocket emissions are significant; this is where complete life-cycle accounting and limits on the number of launches per educational task must be included. The underwater Project Natick initiative offers a helpful analogy: careful design in challenging environments can lead to better reliability than on land. The same discipline should apply to space. If these conditions are met, pilots can advance without greenwashing.
The path to improved learning goes straight through computing. We can continue to argue over permits for substations and water rights, or we can introduce a new layer with different demands and challenges. The opening statistic—more than 1,000 TWh of electricity used by data centers by 2026—is not just a number for a school trying to keep devices charged and cloud tools functioning. It explains rising costs, community pushback, and why outages affect those with the least resources first. Space data centers are not a magic solution. They are a way to increase capacity, reduce local pressures, and strengthen the services students depend on. If education takes the lead in this first round—through small, measurable pilots, strict privacy and debris regulations, and performance-based contracts—we can transform a lofty goal into a grounded policy achievement. The choice is not between dreams in space and crises on Earth. It is about allowing others to dictate the terms or establishing rules that prioritize public education first. Now is the time to draft those rules.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ascend Horizon. (n.d.). Data centres in space.
EESI. (2025, June 25). Data Centers and Water Consumption.
Google. (2025, November 4). Meet Project Suncatcher (The Keyword blog).
IEA. (2024). Electricity 2024 – Executive summary.
IEA. (2025). Energy and AI – Energy demand from AI.
Lincoln Institute of Land Policy. (2025, October 17). Data Drain: The Land and Water Impacts of the AI Boom.
Microsoft. (2020, September 14). Project Natick underwater datacenter results.
Ookla. (2025, June 10). Starlink’s U.S. performance is on the rise.
Reuters. (2025, December 10). Blue Origin working on orbital data center technology — WSJ.
Scientific American. (2025, December). Space-Based Data Centers Could Power AI with Solar.
SpaceX. (n.d.). Smallsat rideshare program pricing.
Thales Alenia Space. (2024, June 27). ASCEND feasibility study results on space data centers.
The Verge. (2025, December 11). The scramble to launch data centers into space is heating up.
Bajpai, V., et al. (2025). Performance Insights into Starlink’s Latency and Packet Loss (preprint).
Wall Street Journal. (2025, December 11). Bezos and Musk Race to Bring Data Centers to Space.