Wikipedia went into read-only mode for two hours yesterday after a Wikimedia Foundation security engineer accidentally imported a malicious script while testing API limits. The script—originally created in 2023 to attack Russian wiki alternatives and dormant on Russian Wikipedia since 2024—spread through MediaWiki:Common.js, a system file that executes JavaScript for all users. It compromised multiple admin accounts, including WMFOffice, and mass-deleted pages across critical namespaces with the ominous edit summary “Закрываем проект” (“Closing the project”).
No permanent data loss occurred, and the Wikimedia Foundation restored everything from backups within two hours. However, the incident exposed catastrophic flaws in Wikipedia’s permission model and sparked intense debate on Hacker News (758 points, 249 comments), where developers called running untrusted code under production admin credentials a “career limiting event.” This isn’t just Wikipedia drama: it’s a textbook case of how trusted accounts become attack vectors and why dormant malicious code is a ticking time bomb.
How a Dormant Script Became a Catastrophe
The attack chain reads like a spy thriller. In 2023, someone created a malicious script designed to attack Wikireality and Cyclopedia, two Russian-language wiki alternatives. A year later, in 2024, user Ololoshka562 uploaded the script to a page on Russian Wikipedia. There it sat, dormant and undetected, for 18 months.
On March 5, 2026, a WMF security engineer imported the script into his Meta-Wiki account while testing global API limits. Unfortunately, he was logged in as a global interface administrator—one of the most powerful roles on Wikipedia. The moment the script executed, it injected itself into MediaWiki:Common.js, a system file that runs JavaScript in every user’s browser across all Wikimedia projects.
From there, the infection spread rapidly. Multiple admin accounts, including the official WMFOffice account, were compromised. The script began mass-deleting pages across namespaces 0-3 (main articles, talk pages, user pages). Wikipedia’s automated systems detected the anomalous activity and triggered read-only mode, stopping the bleeding before catastrophic damage occurred.
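That last step, anomaly detection tripping read-only mode, can be sketched as a sliding-window rate check. The window size, threshold, and `onAnomaly` callback below are illustrative assumptions; the WMF has not published its actual detection tooling at this level of detail.

```javascript
// Illustrative sliding-window detector for deletion events. The window,
// threshold, and onAnomaly hook are assumptions for the sketch, not the
// WMF's real implementation.
class DeletionRateDetector {
  constructor({ windowMs = 60_000, maxDeletions = 50, onAnomaly }) {
    this.windowMs = windowMs;
    this.maxDeletions = maxDeletions;
    this.onAnomaly = onAnomaly;
    this.events = []; // timestamps of recent deletion events
  }

  // Returns true when the deletion rate exceeds the threshold, i.e. the
  // point at which a real system might flip the site to read-only mode.
  recordDeletion(timestampMs) {
    this.events.push(timestampMs);
    const cutoff = timestampMs - this.windowMs;
    while (this.events.length && this.events[0] < cutoff) this.events.shift();
    if (this.events.length > this.maxDeletions) {
      this.onAnomaly(this.events.length);
      return true;
    }
    return false;
  }
}
```

A burst of deletions inside the window trips the callback; the same number spread over hours does not, which is exactly the shape of the anomaly seen here.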
The 18-month dormancy period is the terrifying part. This wasn’t an opportunistic attack—it was patient. The malicious code waited for the right trigger: a trusted staff member with elevated permissions running untrusted scripts in production.
Related: GitHub Issue Title Compromised 4,000 Developer Machines
The Permission Model That Failed
Global interface administrators can edit MediaWiki:Common.js, a JavaScript file that executes in every user’s browser on every page visit across all Wikimedia projects. This creates a perfect attack vector: compromise one admin account with these permissions, and you can inject malicious code that runs globally. Wikipedia limits these permissions to approximately 15 users per major wiki (like English Wikipedia), but one compromise cascades across the entire ecosystem—Wikipedia, Wiktionary, Commons, Wikidata.
The interface administrator role requires two-factor authentication, but 2FA doesn’t prevent what happened here. The engineer was legitimately logged in with valid credentials. He simply made a catastrophic decision: running untrusted user scripts under a production account with global permissions. There was no sandbox, no code review, no validation—just direct execution with the keys to the kingdom.
As one Hacker News commenter put it bluntly: “This is a pretty egregious failure for a staff security engineer. Never execute untrusted code under privileged accounts, especially without sandboxing.” The security community consensus is clear—this was a Security 101 failure at scale.
The broader lesson applies to every platform with powerful admin permissions. One account, one mistake, and the blast radius can extend across your entire infrastructure. Defense in depth means assuming admin accounts will be compromised and designing systems to limit the damage.
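Limiting blast radius usually means replacing one global admin bit with scoped credentials that name the exact actions and namespaces a session may touch. The token shape below is hypothetical, purely to illustrate the deny-by-default check:

```javascript
// Hypothetical scoped-credential check: instead of one global admin bit,
// a session token names the actions and namespaces it may touch.
function makeScopedToken(allowedActions, allowedNamespaces) {
  return {
    actions: new Set(allowedActions),
    namespaces: new Set(allowedNamespaces),
  };
}

function authorize(token, action, namespace) {
  // Deny by default: anything the token does not explicitly name fails.
  return token.actions.has(action) && token.namespaces.has(namespace);
}
```

A token scoped to edits in a user’s own space (namespaces 2 and 3) cannot delete an article in namespace 0, no matter whose credentials issued it.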
What the Script Actually Did
Once activated, the malicious script targeted namespaces 0-3 on Wikimedia projects. These namespaces contain the most critical content: main articles, talk pages, user pages, and user talk pages. The script mass-deleted pages with the Russian edit summary “Закрываем проект”—translating to “Closing the project.” It’s unclear how many pages were affected before Wikipedia’s automated defenses kicked in.
The response was swift. Wikipedia’s systems detected the mass deletions and triggered read-only mode across all Wikimedia projects. The WMF disabled user JavaScript globally to prevent further spread. Within two hours, everything was restored from backups. No permanent data loss occurred, and the WMF confirmed no personal information was breached.
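Watching for this kind of activity is possible with the standard MediaWiki Action API: the `recentchanges` list accepts a pipe-separated `rcnamespace` filter, and `rctype=log` restricts results to log events such as deletions. The helper below only builds the query URL; the English Wikipedia endpoint in the note that follows is just an example.

```javascript
// Build a MediaWiki Action API query for recent log events in namespaces
// 0-3. Uses the standard recentchanges list; deletion entries can then be
// picked out of the JSON response by their logtype field.
function deletionLogUrl(apiBase, namespaces = [0, 1, 2, 3], limit = 50) {
  const params = new URLSearchParams({
    action: "query",
    list: "recentchanges",
    rcnamespace: namespaces.join("|"), // multi-value API params use "|"
    rctype: "log",                     // only log events, not ordinary edits
    rclimit: String(limit),
    format: "json",
  });
  return `${apiBase}?${params}`;
}
```

Calling `deletionLogUrl("https://en.wikipedia.org/w/api.php")` yields a query for the most recent log entries in namespaces 0 through 3, which a monitor could poll and filter for deletions.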
Here’s what’s chilling: the script could have been far worse. Commenters on Hacker News noted that it never attempted browser-autofill credential harvesting, an attack vector one described as “sooo much worse” because it could have stolen passwords from every Wikipedia user and admin. It could also have established persistent backdoors, exfiltrated sensitive data, or planted time-delayed malware. That it “only” deleted pages reflects either merciful restraint or incompetent malware design.
The quick two-hour recovery proves Wikipedia’s backup systems work. But it also shows how close they came to disaster. If the script had been more sophisticated, or if the attack had gone undetected longer, the damage could have been irreversible.
Supply Chain Security Lessons for Developers
This incident is a textbook supply chain security failure. OWASP’s Top 10:2025 lists “Software Supply Chain Failures” as the #3 threat because malicious code can remain dormant for years, evading testing strategies by blending with normal functionality. Wikipedia’s 18-month dormancy period proves the point.
Research from Black Duck emphasizes that malicious code might establish persistent access or wait for specific trigger conditions before activation. In Wikipedia’s case, the trigger was execution by a trusted admin account. The code didn’t need to exploit a vulnerability—it just needed patience.
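A one-function illustration shows why trigger-conditioned code slips past tests: the payload branch looks dead in CI, where the caller never holds elevated rights. `editinterface` is a real MediaWiki user right; everything else here is invented for the example.

```javascript
// Why trigger-conditioned code evades testing: the payload branch never
// executes in CI, where the caller holds no elevated rights. "editinterface"
// is a real MediaWiki right; the rest is invented for this illustration.
function maybeActivate(env) {
  // Reads as defensive feature-gating in code review; in a test environment
  // the condition is never true, so coverage tools only see the noop path.
  if (Array.isArray(env.rights) && env.rights.includes("editinterface")) {
    return "payload-would-run"; // stand-in for the malicious branch
  }
  return "noop";
}
```

Run under any ordinary test account, the function is a no-op; run by an interface administrator, the hidden branch finally fires.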
Every organization using community-contributed code, open-source libraries, or third-party scripts faces this risk. Age doesn’t mean safety—it can mean patience by attackers. Automated malware scanning, permission limits, and sandboxed testing aren’t optional. They’re essential.
For developers, the takeaways are clear. Never run untrusted code under production credentials. Isolate test environments from production. Validate code sources before execution. Assume even “old” or “trusted” code can be compromised. And design systems to limit blast radius when—not if—privileged accounts are compromised.
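“Validate code sources before execution” can start as crudely as a pattern screen that runs before any script is loaded. A regex scan like the one below is trivially bypassable and is no substitute for real malware scanning, but it makes the pre-execution checkpoint concrete; the pattern list is an assumption for the sketch.

```javascript
// Crude pre-execution screen for user scripts. A regex pass like this is
// trivially bypassable; it stands in for the "validate before execution"
// checkpoint, not for a real malware scanner.
const SUSPICIOUS = [
  { name: "eval",             re: /\beval\s*\(/ },
  { name: "dynamic-function", re: /new\s+Function\s*\(/ },
  { name: "cookie-access",    re: /document\.cookie/ },
  { name: "external-load",    re: /mw\.loader\.load\s*\(\s*['"]https?:/ },
];

function screenScript(source) {
  // Returns the names of every pattern the script matches; an empty array
  // means the screen found nothing, which is not the same as "safe".
  return SUSPICIOUS.filter(({ re }) => re.test(source)).map(({ name }) => name);
}
```

A script importing remote code via `mw.loader.load` or touching `document.cookie` gets flagged before it ever reaches a privileged session.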
Key Takeaways
- Never run untrusted code under production admin credentials—sandbox everything, isolate test environments, and validate sources before execution
- Dormant malicious code can wait years for the right trigger—the 18-month dormancy period in Wikipedia’s case shows patient, sophisticated attack planning
- Powerful admin permissions need isolation and validation—Wikipedia’s global interface administrators can inject JavaScript that executes for all users, creating cascading failure risk
- Supply chain security applies to community platforms too—even Wikipedia isn’t immune to dormant threats in user-contributed scripts and open-source code
- Quick recovery matters—Wikipedia’s two-hour restoration from backups prevented catastrophic damage, proving backup systems are essential defense
The Hacker News discussion continues at 758 points with ongoing debate about whether Wikipedia’s architecture is fundamentally broken or if this was isolated human error. Either way, it’s a cautionary tale for every developer managing privileged accounts and community-contributed code.