Incident at a Glance
- 2 days: key exposed before revocation
- Wildcard: SSL key scoped to *.myclaw.360.cn
- Public: installer downloadable, no auth required
- 0: release checks in the pipeline
On March 16, 2026, security researchers found the private SSL key for *.myclaw.360.cn sitting inside a public installer. The installer was for 360 Security Claw, a new AI product from Qihoo 360, China's largest cybersecurity company.
Anyone who downloaded it had the key. With it, you could impersonate any subdomain of the platform, intercept encrypted traffic, or build a login page indistinguishable from the real one. No exploit required. No credentials needed. Just a download.
The key was not hidden. It was sitting inside a bundled archive at a predictable path, namiclaw/components/OpenClaw/openclaw.7z/credentials, extractable with any standard archive tool. The certificate it matched was a wildcard issued two days earlier by WoTrus CA Limited, a Qihoo 360 subsidiary, valid until April 2027.
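Finding material like this takes nothing more exotic than the standard library. Below is a minimal, illustrative sketch of walking an archive's member list for credential-like paths, recursing into nested archives. (The real installer bundled a .7z, which would need a third-party library such as py7zr; plain zip is used here for illustration, and the filename patterns are examples, not an exhaustive ruleset.)

```python
import io
import re
import zipfile

# Filenames that should never appear inside a release artifact.
# Illustrative patterns only; a real scanner would carry a larger ruleset.
SUSPICIOUS_NAMES = re.compile(
    r"(^|/)(credentials?|id_rsa|\.ssh|.*\.pem|.*\.key)$", re.IGNORECASE
)

def suspicious_members(archive_bytes: bytes) -> list[str]:
    """Return member paths whose names suggest secrets, recursing into
    nested zip archives (the Qihoo 360 key sat one archive deep)."""
    found = []
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        for name in zf.namelist():
            if SUSPICIOUS_NAMES.search(name):
                found.append(name)
            if name.lower().endswith(".zip"):  # nested archive: recurse
                inner = zf.read(name)
                found += [f"{name}/{m}" for m in suspicious_members(inner)]
    return found
```

A file literally named credentials, one archive down, is exactly the kind of member this kind of walk surfaces in seconds.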
Qihoo 360 revoked the certificate on March 18 and stated that users were not affected. No explanation was given for how they verified this. No technical post-mortem was published. The company had two days earlier been marketing 360 Security Claw as the safe, responsible alternative to less secure AI tooling.
“A security company shipping a private key in a public installer is not a footnote. It is the finding.”
What Was Actually Exposed
A wildcard SSL private key is not just a credential. It is the cryptographic proof of identity for an entire domain namespace. Possession of the private key for *.myclaw.360.cn meant that any party could stand up a server and present a TLS certificate that browsers and API clients would accept as genuine Qihoo 360 infrastructure. No warning. No visible indication of compromise.
- Impersonate any subdomain of myclaw.360.cn. Create fraudulent servers that TLS clients would accept as legitimate platform infrastructure, with no browser warning.
- Man-in-the-middle encrypted traffic. Intercept and decrypt communications between users and the 360 Security Claw backend, including authentication tokens and session data.
- Forge credential-harvesting login pages. Serve phishing pages with a valid TLS certificate for the legitimate domain, bypassing every browser-level HTTPS warning.
- Hijack agentic AI sessions. 360 Security Claw is an AI agent wrapper. Intercepted sessions could expose or manipulate agent actions, commands, and any data the agent accessed on behalf of the user.
Every user who downloaded the installer before revocation had a copy of the key. The window was approximately two days. Qihoo 360 cannot credibly know how many copies were made or whether any were used, and the company has not explained how it concluded that users were unaffected.
The WoTrus Complication
The certificate was issued by WoTrus CA Limited, a direct subsidiary of Qihoo 360. WoTrus is the rebranded successor to WoSign CA, which was formally distrusted by every major browser root program, including Google (Chrome), Mozilla (Firefox), Apple (Safari), and Microsoft (Edge and the Windows trust store), in 2016 and 2017.
WoSign was found to have backdated 64 SHA-1 certificates and secretly acquired Israeli certificate authority StartCom without disclosing the acquisition to any browser root program. WoSign rebranded to WoTrus in August 2017, but the old WoSign root certificates remained globally distrusted. WoTrus today primarily resells certificates issued under DigiCert and Certum roots rather than operating as a trusted root authority in its own right.
A cybersecurity company shipped a private key for a certificate issued by its own CA subsidiary, a CA that the global browser trust infrastructure already regards as compromised. This is not bad luck. It is a systemic failure of release security at every level of the organisation.
This Is Not New. It Just Keeps Happening.
The Qihoo 360 incident is not an isolated embarrassment. It belongs to a documented, recurring pattern of private keys and credentials shipped inside software packages. The industry has known about this class of failure for over a decade and has not solved it.
Historical Pattern: Private Keys Shipped in Software
Lenovo Superfish (2015)
Lenovo pre-installed adware containing a self-signed root CA certificate with the same private key across all affected machines. Once researchers extracted and published the key, any attacker could generate fraudulent HTTPS certificates trusted by every affected Lenovo computer. The US DHS issued a removal advisory.1
Dell eDellRoot (November 2015)
Dell shipped laptops and desktops with a root CA certificate, eDellRoot, that included the private cryptographic key. The certificate enabled man-in-the-middle attacks on HTTPS traffic and fraudulent certificate issuance. Dell pushed automatic removal within days of public disclosure.2
Rabbit R1 Hardcoded API Keys (June 2024)
Researchers found critical API keys hardcoded in the Rabbit R1 AI device firmware, including ElevenLabs (full-privilege), Azure, Google Maps, and SendGrid. The keys permitted reading all historical R1 responses and bricking devices. Rabbit was notified on May 16 but took no action until public disclosure on June 25.3
PKfail / CVE-2024-8105 (July 2024)
Binarly Research found 813 or more device models across Acer, Dell, HP, Intel, Lenovo, Gigabyte, Fujitsu, and Supermicro shipping with a default AMI test Platform Key for UEFI Secure Boot, a key whose private portion had been exposed in a data breach. An attacker with the key could bypass Secure Boot entirely and install firmware-level malware. The first vulnerable firmware dates to May 2012; the supply chain window was twelve years. CVSS 8.2.4
These incidents share a common root cause: nobody scanned the release artifact before it shipped. Not the build pipeline. Not a pre-release check. Not any automated tool. The credentials were present in the package, and the package went out.
How Common Is This?
Shipping secrets in software is not a niche failure. According to GitGuardian's State of Secrets Sprawl 2025 report, 23.8 million secrets were leaked on public GitHub in 2024, a 25 percent increase over 2023. 4.6 percent of public repositories contain at least one exposed secret. 70 percent of secrets leaked in 2022 are still active today.5
The problem extends well beyond source code. A separate GitGuardian analysis of 15 million Docker images found over 100,000 valid secrets embedded in 170,000 images, including more than 7,000 live AWS access keys. 98 percent of those secrets were found in image layers, not configuration files, meaning they were baked in during build and shipped as part of the artifact.6
- 23.8M secrets leaked on GitHub in 2024
- 4.6% of public repos contain a secret
- 70% of leaked secrets are still active
This is not a problem that awareness solves. Developers know they should not ship credentials. They do it anyway, usually not deliberately, but because there is no automated check between the build and the release that would catch it.
How ReleaseGuard Would Have Caught This
ReleaseGuard scans software artifacts at release time, before distribution. It inspects the contents of packages, including nested archives like .7z, .zip, and .tar files, for credential patterns including private keys, certificates, API tokens, and high-entropy strings consistent with cryptographic material.
The Qihoo 360 key was sitting in openclaw.7z/credentials inside the installer, at a path that makes its contents self-evident. A filename of “credentials” inside a bundled archive is precisely the kind of anomaly ReleaseGuard flags. The key material itself, a PEM-encoded RSA private key block, is a standard pattern that any secret scanner recognises in milliseconds.
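The PEM armour line is a fixed, unambiguous marker, which is why detection is trivial. A minimal sketch (illustrative, not ReleaseGuard's actual implementation) of the content-level check:

```python
import re

# The fixed PEM armour line is enough to flag a file for review;
# ([A-Z]+ )* covers the RSA, EC, DSA, OPENSSH and ENCRYPTED variants.
PEM_BEGIN = re.compile(rb"-----BEGIN ([A-Z]+ )*PRIVATE KEY-----")

def contains_private_key(blob: bytes) -> bool:
    """True if the byte blob contains a PEM private-key header."""
    return PEM_BEGIN.search(blob) is not None
```

Run against every member of every nested archive in an artifact, a check like this completes in well under a second for a typical installer.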
What ReleaseGuard Detects
- PEM-encoded private keys (BEGIN RSA PRIVATE KEY, BEGIN EC PRIVATE KEY, etc.) inside release archives
- X.509 certificates bundled alongside their matching private keys
- Credential-pattern files in unexpected archive paths (e.g., /credentials, /.ssh)
- High-entropy strings consistent with API keys, tokens, and symmetric keys
- Unexpected files added to the artifact compared to prior releases (diff-based anomaly detection)
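The high-entropy check in the list above is the standard heuristic for catching keys that lack a fixed header. A minimal sketch, with an illustrative threshold (real scanners tune this per encoding, e.g. higher for base64 than for hex):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_secret(token: str, threshold: float = 4.0) -> bool:
    """Heuristic: long, high-entropy strings resemble keys and tokens.
    The 4.0 bits/char threshold is an illustrative starting point."""
    return len(token) >= 20 and shannon_entropy(token) > threshold
```

Random key material scores near the maximum for its alphabet; English words and repeated characters score far lower, which keeps the false-positive rate manageable.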
If Qihoo 360 had run ReleaseGuard, or any artifact scanner, as part of the release pipeline for 360 Security Claw, the private key would have been flagged before the installer was uploaded to any public server. The certificate would never have been exposed. There would be nothing to revoke.
“This is not a sophisticated attack. It is a missing step in the release process. One automated check. That is the entire gap.”
Why Isn't Everyone Doing This?
This is the question that should make every engineering leader uncomfortable. The technology to catch hardcoded credentials in release artifacts is not new. Tools like TruffleHog, GitGuardian, and ReleaseGuard exist specifically for this purpose. The MITRE CWE taxonomy has catalogued CWE-321 (Use of Hard-coded Cryptographic Key) and CWE-798 (Use of Hard-coded Credentials) for years.7,8 OWASP maps this failure class to A07:2021, Identification and Authentication Failures.9
The problem is not knowledge. The problem is that secret scanning is almost always applied to source code repositories, not to release artifacts. There is a gap between “we scanned the repo” and “we scanned what we actually shipped.” Those two things are not the same. Build processes transform source code. Dependencies get bundled. Assets get included. Temporary credentials get embedded during automation and never removed. A clean repo does not guarantee a clean artifact.
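Closing that gap is mechanically simple: hash the members of what you actually ship and diff against the last release. A minimal sketch (illustrative helper names; a zip artifact is assumed for simplicity):

```python
import hashlib
import zipfile

def manifest(src) -> dict[str, str]:
    """Map each member of a zip artifact to the SHA-256 of its content.
    `src` may be a path or a file-like object."""
    with zipfile.ZipFile(src) as zf:
        return {
            name: hashlib.sha256(zf.read(name)).hexdigest()
            for name in zf.namelist()
        }

def new_files(previous: dict[str, str], current: dict[str, str]) -> set[str]:
    """Files present in the new artifact but absent from the last release --
    exactly where a stray `credentials` file would surface."""
    return set(current) - set(previous)
```

Any unexpected addition, a credentials file, a .pem, a bundled archive that was not there last release, shows up in the diff before the artifact leaves the pipeline.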
Most security tooling scans code before it is built. Almost nothing scans the artifact after it is built and before it is distributed. That window, between the end of the build and the moment of distribution, is where Qihoo 360's key slipped through. It is where the Lenovo Superfish key slipped through. It is where most of these incidents originate. And it is almost entirely unmonitored.
Why Helixar Is Saying This
Helixar built ReleaseGuard because we operate in the AI infrastructure space and we see, directly, how fast the attack surface is expanding. Every new AI tool, every new agent framework, every new LLM integration wrapper is a release artifact that somebody built quickly and shipped without a complete security review. The incentives are for speed. The tooling has not kept pace.
The LiteLLM supply chain attack, a malicious package pushed directly to PyPI nine days before this article was written, was caught only because the malware crashed its own victim. The Qihoo 360 key was caught only because a developer on a Chinese forum opened the installer archive out of curiosity. Neither of these was caught by any security control in the release pipeline.
We are not calling these incidents because we enjoy criticising other companies. We are calling them because the absence of artifact scanning in release pipelines is a specific, fixable problem, and nobody else in this industry is saying it clearly enough. Artifact scanning at release time should be as standard as unit tests. It is not. We think it should be, and we built the tool to make it easy.
A Word from the Helixar CEO
“Qihoo 360 is not a small startup that cut corners under pressure. They are the world's largest cybersecurity company by user count. If they shipped a private key in a public installer, so will your favourite SaaS tool, your favourite developer framework, and your favourite AI wrapper. This is not an aberration. It is what happens when nobody scans the artifact.
ReleaseGuard is our answer to this. It is free, open source, and it runs in your existing CI/CD pipeline. There is no good reason not to run it. The question after reading this article should not be whether Qihoo 360 was negligent. The question should be: when did you last scan your own release artifact?”
CEO, Helixar.ai
About Helixar
Helixar is an AI-native cybersecurity company building endpoint and API security for the agentic AI era. Its products include the Vigil endpoint agent, the Shield/ATP inbound API inspection layer, and FishBowl, the first OS-native sandbox purpose-built for AI agents. Helixar Labs maintains a suite of open source security tools including ReleaseGuard, Sentinel, and Unpinched.
Helixar is based in Auckland, New Zealand, and is currently in free pilots.
References
- CISA. (2015). Lenovo Superfish Adware Vulnerable to HTTPS Spoofing. Cybersecurity and Infrastructure Security Agency. cisa.gov
- Help Net Security. (2015). Dell shipped computers with root CA cert and private crypto key included. helpnetsecurity.com
- Cybernews. (2024). Critical Rabbit R1 security flaw exposes all historical responses. cybernews.com
- Binarly. (2024). PKfail: Untrusted Platform Keys Undermine Secure Boot on UEFI Ecosystem. Advisory BRLY-2024-005. binarly.io · CVE-2024-8105. cvedetails.com
- GitGuardian. (2025). The State of Secrets Sprawl 2025. blog.gitguardian.com
- GitGuardian. (2024). Fresh From The Docks: Uncovering 100,000 Valid Secrets in DockerHub. blog.gitguardian.com
- MITRE. (2023). CWE-321: Use of Hard-coded Cryptographic Key. Common Weakness Enumeration. cwe.mitre.org
- MITRE. (2023). CWE-798: Use of Hard-coded Credentials. Common Weakness Enumeration. cwe.mitre.org
- OWASP. (2021). A07:2021 — Identification and Authentication Failures. Open Web Application Security Project. owasp.org
- Barrack AI. (2026). Qihoo 360 SSL Key Leak and WoTrus CA Fraud Analysis. blog.barrack.ai
- Neowin. (2026). China's biggest cybersecurity firm accidentally leaked an SSL key in a public installer. neowin.net
- CyberInsider. (2026). Qihoo 360's New AI-Powered Security Tool Exposed SSL Private Key. cyberinsider.com
- TechSpot. (2026). Qihoo 360 accidentally exposed a private SSL key, putting users at risk of MITM attacks. techspot.com
- Mozilla. (2016). WoSign and StartCom: CA action items. Mozilla Bugzilla, Bug 1311824. bugzilla.mozilla.org
- The Register. (2026). China's CERT warns OpenClaw can inflict nasty wounds. theregister.com
