Threat Intelligence · March 2026 · 11 min read

The Next War Won't Have a Front Line

AI agents are doing to cybersecurity what drones did to armoured warfare. The attack surface has widened faster than any defence has adapted — and almost anyone can now run the offensive.

Illustration: an AI agent drone swarm overwhelming a legacy tank defence, visualising the economics of agentic cyber warfare. AI-generated image created with Google Gemini.

The Numbers

  • $10.5T: estimated global cybercrime cost by 2025
  • 72 hrs: median breach detection time
  • ~$50/mo: to run a basic agentic attack loop
  • 1 node: a single compromised AI setup is a new attack surface

In 2022, something changed on the battlefield. Drones that cost a few hundred dollars began destroying tanks that cost millions. Not because the drone was more powerful — it was not. Because it changed the economics and the geometry of the fight. One operator. Unlimited range. Expendable. Scalable. The tank, optimised for a symmetrical war that no longer existed, became a liability. The same shift is happening right now in cybersecurity. AI agents are the drones. Legacy security architecture is the tank. And the war is already underway.

The Drone Moment for Cyber Warfare

The drone analogy holds at every level. Before FPV drones became weapons, lethality was proportional to budget. You needed expensive platforms, trained operators, and logistics infrastructure to project force. The barrier to capability was high, and that barrier was load-bearing. It constrained who could threaten whom.

Cheap drones collapsed that barrier. A $400 drone carrying a $10 grenade could destroy a $4 million armoured vehicle. The economics of offence and defence, which had been roughly stable for decades, inverted overnight. Defence establishments that had spent decades and billions optimising for a world where that inversion did not exist found themselves with fleets of expensive assets that had become, in the new geometry, liabilities.

“Just as drones made it possible for a small team to destroy a billion-dollar asset, AI agents make it possible for a low-resourced threat actor to compromise a billion-dollar enterprise. The economics have collapsed. The barrier to sophisticated attack has never been lower.”

The parallel extends to who holds the capability. Drones democratised lethal force: states and non-state actors alike gained access to offensive power that was previously available only to well-funded militaries. AI agents are democratising offensive cyber capability in exactly the same way, and the asymmetry between attacker and defender has never been steeper.

What Changed and When: The Collapse of the Skill Floor

Until recently, a sophisticated cyberattack required expertise: skilled operators who could map networks, identify vulnerabilities, pivot across systems, and exfiltrate data without triggering alerts. That expertise was scarce and expensive. Nation-state actors had it. Advanced criminal organisations had it. Almost nobody else did.

The arrival of capable, general-purpose AI agents changed the supply equation entirely. The skills are now available to anyone who can afford API access — which is to say, almost anyone. For approximately $50 per month in compute costs, a threat actor with no technical background can run an agentic attack loop that autonomously maps API surfaces, probes for injection vulnerabilities, chains access escalations, and exfiltrates data. The limiting factor shifted from human expertise to intent.

The Capability Shift

Before

  • Attack required human expertise — skilled operators were scarce
  • Attack rate was bounded by human supply
  • Multi-stage attacks took days or weeks of human planning
  • Detection had time to catch up — human attackers needed sleep
  • Nation-states and serious criminal actors only

Now

  • Agent executes the same steps autonomously, 24/7
  • Agents scale horizontally — hundreds of parallel attack threads
  • Agentic attack loops run continuously, adapting in real time
  • Agents don't sleep — enumeration, probing, escalation never pause
  • Any individual with a laptop and an API subscription

The Three New Attack Surfaces

The attack surface is not just widening. It is changing shape in ways that existing security architecture was not designed to cover. Three surfaces in particular represent the frontier of agentic threat exposure.

1. Military and Defence Infrastructure

Autonomous agents targeting command systems, logistics networks, satellite uplinks, and supply chain tooling represent a new class of threat to defence infrastructure. Not necessarily to destroy — to confuse, delay, or corrupt at a moment of operational pressure. The same agentic enumeration technique used against McKinsey's Lilli platform — systematic API surface mapping, unauthenticated endpoint exploitation, IDOR chaining — applies unchanged to military-adjacent infrastructure with equivalently poor AI platform security hygiene.

The defining characteristic of this threat is not sophistication. It is persistence. A human attacker can be tired, distracted, or deterred. An agent swarm probing a defence contractor's API network runs continuously at machine speed, across all discovered endpoints simultaneously, adapting based on response patterns. The operational tempo is inhuman because the operator is not human.

2. Critical National Infrastructure

Power grids, water treatment facilities, financial clearing systems, and hospital networks are all exposed through a common structural weakness: poorly secured industrial control interfaces, vendor remote-access portals, and AI-connected monitoring tools added to legacy infrastructure over the past decade.

Agents autonomously scanning SCADA system interfaces exposed through vendor remote-access portals represent a credible current threat. A single unpatched endpoint in an energy company's AI operations platform becomes the entry point to grid management infrastructure — not through a sophisticated zero-day exploit, but through the same SQL injection and IDOR chaining pattern that is publicly documented and freely available in agentic attack frameworks.

3. The Compromised AI Node: The Surface Nobody Is Talking About

The third surface deserves more attention than it is getting. Most security discourse focuses on attacks against enterprises and infrastructure. The compromised individual AI setup is a different and less discussed threat.

An individual running an AI agent with local filesystem access, a connected development environment, or an agentic workflow that touches internal network resources is operating attack-surface-equivalent infrastructure. If that environment is compromised — through a poisoned model prompt, a malicious tool integration, a compromised API key, or a vulnerable dependency in an open-source agent framework — the attacker inherits the agent's access.

“You don't need to hack the enterprise. You need to compromise the employee's AI setup. The agent has the access. The attacker just needs to redirect it.”

This is not a theoretical edge case. Model Context Protocol (MCP) integrations, agentic coding tools, and local LLM setups are proliferating faster than security guidance is being written for them. A compromised AI node in a mid-level employee's home office can provide more access to sensitive resources than a sophisticated external attacker could achieve through months of traditional intrusion work. And it operates silently, through sessions the user considers completely normal.

Automated Swarms: The Drone Fleet Equivalent

Drone warfare changed not just cost and range but scale. A single operator can direct dozens of units simultaneously. The limiting factor shifted from pilot supply to coordination and communications bandwidth.

Agentic attack swarms operate on the same principle. A threat actor running an agent loop does not run one agent at a time. They run populations of agents in parallel — each probing a different target, a different endpoint, a different vulnerability class — with results feeding back into a shared intelligence layer that escalates promising threads. The drone operator analogy is exact: one threat actor, hundreds of simultaneous attack vectors.

Asymmetry

A human attacker costs $200,000+ per year in salary and can probe roughly one target at a time. An agentic attack swarm costs under $100 per day and runs against hundreds of targets simultaneously. The economics do not favour the defender under any existing model — and the gap is widening as models become cheaper and more capable.

The implications for detection are severe. Traditional security monitoring was designed around the concept of an attacker operating sequentially. Signature-based detection, rate limiting, and anomaly thresholds were calibrated for human attackers. A swarm of agents probing at inhuman speeds and breadth — each individual thread appearing low-volume and benign — defeats these controls at a design level, not an implementation level. You cannot tune your way out of this with better thresholds.
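The design-level failure is easy to demonstrate. The toy simulation below uses synthetic traffic and illustrative thresholds (nothing here models a real WAF or SIEM): 200 swarm sources each stay far below a per-source rate limit, yet collectively enumerate most of a 500-endpoint API surface. The per-source control flags nothing; an aggregate coverage metric makes the enumeration obvious.

```python
import random

RATE_LIMIT = 60  # per-source requests/minute a typical control might allow
ENDPOINTS = [f"/api/v1/resource/{i}" for i in range(500)]

# Simulate a swarm: 200 sources, each probing only a handful of endpoints,
# so every individual source stays far below the rate limit.
random.seed(7)
swarm_log = []
for source in range(200):
    for endpoint in random.sample(ENDPOINTS, 5):
        swarm_log.append((f"10.0.0.{source}", endpoint))

# Per-source rate limiting: the control calibrated for human attackers.
per_source = {}
for src, _ in swarm_log:
    per_source[src] = per_source.get(src, 0) + 1
flagged = [s for s, n in per_source.items() if n > RATE_LIMIT]
print(f"sources flagged by rate limiting: {len(flagged)}")  # every source looks benign

# Aggregate view: unique endpoint coverage across all sources in the window.
coverage = len({ep for _, ep in swarm_log}) / len(ENDPOINTS)
print(f"endpoint coverage in window: {coverage:.0%}")  # enumeration is clear in aggregate
```

The point is structural: the signal lives in the population of requests, not in any individual thread, so tuning the per-source threshold cannot recover it.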

Why Existing Defences Are Structurally Mismatched

The gap between attacker capability and defender architecture is not a resourcing problem. It is a structural mismatch. Security tools designed for a previous era of threat cannot be patched to address this one — just as no amount of additional armour on a tank addresses the threat of a $400 drone.

Signature Dependency

The dominant model in security tooling is signature-based detection: identify known bad patterns, block them, update the list. This model works when the attack universe is stable and enumerable. Against agentic threats that adapt in real time, generate novel attack paths, and operate outside documented exploit patterns, it fails at the foundation. You cannot write a signature for behaviour you have not seen before. AI-generated attack paths are, by definition, not in any signature database.

Human-Speed Incident Response

Security operations centres were designed around human-speed attacks. An analyst has time to review an alert, investigate context, and escalate. An agentic attacker operating at machine speed can complete the entire kill chain — reconnaissance, access, lateral movement, exfiltration — in the time it takes an on-call analyst to open a ticket. The median breach detection time of 72 hours, documented in IBM's 2024 Cost of a Data Breach Report, was already far too slow for human-speed attacks. Against agentic operations that complete in minutes, it is not even in the same order of magnitude.

The Perimeter Assumption

Perimeter security assumes a clear boundary between inside and outside. Agentic threats, particularly those operating through compromised AI nodes or legitimate API sessions, operate inside the perimeter by design. They arrive through trusted channels — a developer's AI tool, a vendor API integration, an internal LLM session — and the perimeter has already granted them trust. There is no perimeter to defend when the attacker is already inside a trusted session.

No Coverage for the Prompt Layer

The AI-specific attack surface — system prompts, model configurations, RAG knowledge bases, agentic session contexts — does not exist in the threat model of most security tooling. It is not monitored, not access-controlled, and not included in incident response playbooks. An attacker who rewrites a system prompt has made a change that no conventional security tool will flag, and whose effects may not be visible for weeks. The AI simply behaves differently, and no alert fires.
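One countermeasure the paragraph implies is treating prompt and configuration stores like any other integrity-monitored artefact. Below is a minimal sketch that assumes prompts live as files under a directory; real deployments may keep them in databases or behind vendor config APIs, and the paths here are hypothetical. The principle, not the storage layout, is what matters: baseline the content, then alert on any add, remove, or modify.

```python
import hashlib
import pathlib

def snapshot(store: pathlib.Path) -> dict:
    """Hash every prompt/config file under the store."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(store.rglob("*")) if p.is_file()
    }

def diff(baseline: dict, current: dict) -> list:
    """Return (path, change) pairs: anything added, removed, or modified."""
    events = []
    for path in sorted(baseline.keys() | current.keys()):
        if path not in current:
            events.append((path, "removed"))
        elif path not in baseline:
            events.append((path, "added"))
        elif baseline[path] != current[path]:
            events.append((path, "modified"))
    return events
```

Run `snapshot()` on a schedule, `diff()` against the stored baseline, and route any event to incident response: a system-prompt modification has no legitimate unreviewed cause, so even this crude check closes a gap that most tooling leaves fully open.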

“The attack surface has widened. The defender's architecture has not. The gap between what attackers can do and what defenders can see is the defining security problem of the next decade.”

Helixar's Approach: Built for the Agentic Threat Era

Helixar was not designed as an incremental improvement on existing security tools. It was designed from the ground up for a threat landscape in which the attacker is agentic, the attack surface includes the AI layer, and the kill chain can complete in minutes rather than days.

Helixar is in active pilot testing. The following describes the platform's intended architecture and theoretical detection posture. Specific capabilities are subject to change as the product matures.

Three principles define the approach:

  • Behavioural detection over signatures: Agentic threats generate novel attack paths that no signature library anticipates. Helixar is designed to detect the trajectory of threat behaviour — the sequence and intent of actions across an environment — rather than matching individual events to known patterns. A threat is visible in how it moves, not only in what technique it uses.
  • Session-level intent scoring: Individual API requests can appear benign. The attack is visible at the session level — in the pattern of requests over time, the breadth of endpoint coverage, the systematic mutation of parameters. Helixar is designed to classify session intent, not just inspect individual calls.
  • Cross-layer correlation: Agentic kill chains move across layers — from API to endpoint to configuration store. Treating each layer's signals independently produces isolated alerts that look low-severity. Correlating them produces a unified kill-chain view that reveals the attack in its full context. This is not possible without architecture designed for it.
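The cross-layer correlation principle can be illustrated in a few lines. This is a toy correlator with an invented alert schema and an illustrative two-layer threshold, not a real product interface: individually low-severity alerts are grouped by actor identity, and a group spanning multiple layers is escalated as one kill-chain candidate.

```python
from collections import defaultdict

# Layers an agentic kill chain typically crosses (illustrative set).
KILL_CHAIN_LAYERS = {"api", "endpoint", "config"}

def correlate(alerts: list) -> list:
    """Group alerts by actor; escalate actors whose activity spans layers."""
    by_actor = defaultdict(list)
    for a in alerts:
        by_actor[a["actor"]].append(a)
    incidents = []
    for actor, group in by_actor.items():
        layers = {a["layer"] for a in group}
        if len(layers) >= 2:  # illustrative threshold
            incidents.append({
                "actor": actor,
                "layers": sorted(layers),
                "severity": "high" if layers >= KILL_CHAIN_LAYERS else "medium",
                "alerts": len(group),
            })
    return incidents

alerts = [
    {"actor": "svc-token-17", "layer": "api",      "detail": "enumeration pattern"},
    {"actor": "svc-token-17", "layer": "endpoint", "detail": "unusual process chain"},
    {"actor": "svc-token-17", "layer": "config",   "detail": "prompt store write"},
    {"actor": "alice",        "layer": "api",      "detail": "burst of requests"},
]
print(correlate(alerts))
```

Each alert alone reads as low-severity noise; grouped by actor, the three-layer span of `svc-token-17` is the kill chain itself.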
Threat surface: Agentic attack swarms
Theoretical detection signal: Session-level behavioural scoring flags the enumeration pattern — high endpoint coverage, systematic parameter mutation, abnormal request cadence — invisible per request but statistically clear at the session level.
Why this matters: Swarms are designed to evade per-request detection. Session-level intent classification is the appropriate detection layer.

Threat surface: Military / critical infrastructure API probing
Theoretical detection signal: The iterative probe pattern generates a high-risk session classification before bulk data extraction begins. Unauthenticated write endpoints probed systematically are among the highest-signal threat indicators.
Why this matters: Infrastructure attacks follow the same behavioural kill chain as enterprise attacks. Technique varies; trajectory does not.

Threat surface: Compromised AI node
Theoretical detection signal: Endpoint-level behavioural signals from an agentic session deviating from normal patterns — unusual process chains, anomalous outbound connections, filesystem access outside expected scope — generate early-stage IOB signals.
Why this matters: Compromised AI nodes are trusted by design. Perimeter tools have no view. Endpoint behavioural detection is the only layer with visibility.

Threat surface: Prompt layer / AI configuration tampering
Theoretical detection signal: Write operations targeting AI configuration, system prompt stores, or model settings are a high-signal event with no legitimate pattern in normal user sessions.
Why this matters: No conventional security tool monitors the prompt layer. Detection here requires architecture that treats the AI layer as a security boundary.
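The session-level behavioural signals described above can be sketched as a scoring function. This is a toy with three hand-picked features and hard-coded thresholds, not Helixar's actual scoring model: breadth of endpoint coverage, parameter-set mutation, and machine-regular request cadence each contribute to an intent score in [0, 1].

```python
from dataclasses import dataclass

@dataclass
class Request:
    endpoint: str
    params: frozenset   # parameter names seen on this request
    ts: float           # seconds since session start

def intent_score(session: list) -> float:
    """Toy session-level score in [0, 1]: higher = more enumeration-like."""
    if len(session) < 2:
        return 0.0
    # Breadth: distinct endpoints relative to request count.
    breadth = len({r.endpoint for r in session}) / len(session)
    # Mutation: distinct parameter-name combinations tried.
    mutation = len({r.params for r in session}) / len(session)
    # Cadence: machine-regular inter-request gaps (low relative variance).
    gaps = [b.ts - a.ts for a, b in zip(session, session[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    regularity = 1.0 if mean and var / (mean ** 2) < 0.05 else 0.0
    return round((breadth + mutation + regularity) / 3, 2)

# A browsing user: few endpoints, no systematic mutation, irregular timing.
human = [Request("/home", frozenset(), 0.0),
         Request("/search", frozenset({"q"}), 8.2),
         Request("/item/4", frozenset(), 31.0),
         Request("/home", frozenset(), 95.5)]
# An enumerating agent: wide coverage, mutating params, clockwork cadence.
agent = [Request(f"/api/v1/users/{i}", frozenset({"id", f"p{i}"}), i * 0.5)
         for i in range(40)]
print(intent_score(human), intent_score(agent))
```

Even with crude features, the two sessions separate cleanly; a production system would learn the features and thresholds rather than hard-code them, but the unit of classification (the session, not the request) is the load-bearing design choice.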

The Gap Nobody Is Closing Fast Enough

There is no comfortable way to frame the current situation. The attack surface is expanding faster than the security industry is building defences for it. The tools that will be needed do not yet exist at production scale. And the threat actors — state-backed, criminal, and individual — are already operating with the new capability.

The specific vulnerabilities are solvable. Unauthenticated endpoints can be locked. Parameterisation can be enforced. Prompt stores can be access-controlled. These are engineering problems with engineering solutions, and enterprises that take them seriously can dramatically reduce their exposure.

The harder problem is the detection layer. You cannot firewall an agent swarm the same way you firewall a known IP range. You cannot write a signature for an attack that adapts in real time. And you cannot respond at human speed to an attack that completes in minutes.

The Core Problem

The security industry is largely still building the equivalent of better tank armour. The threat has already moved to drones. The architecture needs to change — not the implementation. Incremental improvements to a signature-based SIEM do not produce behavioural detection of agentic threats. They produce a faster SIEM.

Helixar's thesis is that the answer to an agentic threat is an agentic defence — a system that operates at the speed and scale of the attack, detects behavioural trajectories rather than known signatures, and treats the AI layer itself as a security boundary that needs to be monitored and protected.

Three facts about the trajectory of this problem are not in dispute:

  • The attack surface is not going to shrink. Every new AI deployment, every new agentic workflow, every new MCP integration adds surface area that existing security tooling was not designed to cover.
  • The cost of offensive capability is not going to rise. If anything, models will get cheaper and more capable, and the $50/month attack loop will become a $5/month attack loop within the current decade.
  • The window for getting ahead of this is narrowing. Detection architecture takes time to build, validate, and deploy. The time to start is before the incident, not after an autonomous agent has already spent two hours in your production database.

References

  1. Cybersecurity Ventures. (2023). Cybercrime to Cost the World $10.5 Trillion Annually by 2025. Cybersecurity Ventures. cybersecurityventures.com
  2. IBM Security. (2024). Cost of a Data Breach Report 2024. IBM Corporation. ibm.com
  3. CrowdStrike. (2026). 2026 CrowdStrike Global Threat Report. CrowdStrike Inc. crowdstrike.com
  4. The Hacker News. (2026). Google: State-Backed Hackers Used Gemini AI for Cyberattack Reconnaissance. The Hacker News, February 2026. thehackernews.com
  5. CISA. (2024). Guidelines for Secure AI System Development. Cybersecurity and Infrastructure Security Agency. cisa.gov
  6. WIRED. (2023). The Hard Lessons of Ukraine's Killer Drone Program. WIRED Magazine. wired.com
  7. Microsoft & OpenAI. (2024). Disrupting malicious uses of AI by state-affiliated threat actors. Microsoft Security Blog, February 2024. microsoft.com
  8. Anthropic. (2025). Transparency Report: Disrupted AI-Enabled Intrusion Campaigns. Anthropic, November 2025. anthropic.com
  9. NIST. (2023). AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. airc.nist.gov

The drone moment for cyber has arrived. Is your detection architecture built for it?

Helixar is entering paid pilots. Talk to us about your real agentic threat exposure.

Book a Walkthrough