Helixar.ai
Research, Articles & Announcements
Research, threat intelligence, and announcements from the Helixar team.
Helixar Accepted into NVIDIA Inception as Agentic AI Security Moves from Research to Category
From 360-degree detection and threat research to HDP, HDP-P, and a live Hugging Face demo, Helixar is building across platform, protocol, and research at once.
Helixar has been accepted into NVIDIA Inception as it expands its agentic AI security platform, published research, HDP protocol work, HDP-P physical AI security efforts, and live demonstrations. Also supported by Google for Startups, the company is building the infrastructure layer for a market that is only just starting to name itself.
Anthropic's Claude Mythos Preview Just Changed the Security Equation. We Built Helixar for Exactly This.
The model autonomously chains zero-day exploits across every major OS and browser with no human guidance. Helixar published HDP and ReleaseGuard before this moment, and built behavioral detection for the threat no signature database can stop.
Anthropic's Claude Mythos Preview can autonomously find zero-day vulnerabilities in every major OS and browser, write working exploits, and chain them to escape browser sandboxes, with no human guidance after the initial prompt. Helixar published the HDP open standard and ReleaseGuard before this threat materialised. Here is why the architecture we built is the right answer to what Anthropic just announced.
When AI Agents Control Physical Systems, a Prompt Injection Becomes a Physical Event
Gemma 4 runs on a Jetson Nano and does native function calling. HDP-P is the open protocol putting a cryptographic gate between the model and the actuator layer.
Gemma 4 runs on edge hardware and does structured function calling. When that function calling is wired to a physical actuator, a bad model output is no longer a software problem. HDP-P is the open protocol that puts a cryptographic authorization layer between the model and the physical world, and the live Hugging Face demo shows it blocking a malicious command injection in real time.
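The core idea of a cryptographic authorization layer can be sketched in a few lines: the actuator executes only commands carrying a valid authentication tag, so a command injected into the model's output, which no authorizer ever signed, is rejected. This is a minimal illustrative sketch using HMAC, not the actual HDP-P specification; the names (`sign_command`, `gate`, the demo key) are hypothetical.

```python
import hmac
import hashlib
import json

# Hypothetical shared key; a real deployment would use per-device
# provisioned keys or asymmetric signatures, not a hardcoded secret.
SHARED_KEY = b"demo-key-not-for-production"

def sign_command(command: dict) -> str:
    """Authorizer side: tag an approved command with an HMAC."""
    payload = json.dumps(command, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def gate(command: dict, tag: str) -> bool:
    """Actuator side: execute only if the tag verifies.

    An injected command arrives without a matching tag, so the
    HMAC comparison fails and the actuator never moves.
    """
    payload = json.dumps(command, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

approved = {"actuator": "arm", "action": "stop"}
tag = sign_command(approved)

# A prompt-injected command reuses the old tag but a different payload.
injected = {"actuator": "arm", "action": "override_limits"}

print(gate(approved, tag))  # True: signed by the authorizer
print(gate(injected, tag))  # False: tag does not cover this command
```

The design choice this illustrates: the model is treated as untrusted input, and authorization happens in a separate layer that the model's text output cannot forge.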
Anthropic Leaked 512,000 Lines of Claude Code via a Misconfigured npm Package
A single debug file shipped to the public registry. The entire source followed. This was the second time.
On March 31, 2026, a 59.8 MB JavaScript source map in @anthropic-ai/claude-code v2.1.88 pointed to a publicly accessible Cloudflare R2 bucket containing the full Claude Code source: 1,906 TypeScript files, 512,000 lines, every internal tool and slash command, and a stealth system designed to prevent exactly this kind of leak. A single misconfigured .npmignore. The second time this happened.
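Leaks of this class typically come down to build artifacts (source maps, debug output) not being excluded from the published tarball. A hypothetical `.npmignore` fragment that would keep such files out of the registry; the patterns are illustrative, and the actual Anthropic configuration is not public:

```
# Keep build debug artifacts out of the published package
*.map
*.js.map
debug/
```

Running `npm pack --dry-run` before publishing prints the exact file list that would ship, which is the simplest way to catch a stray source map before it reaches the public registry.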
More articles forthcoming. Helixar research is published as findings are validated.