Skip to main content
AI Malware: The Code That Rewrites Itself Is Already Here
· 7 min read

AI Malware: The Code That Rewrites Itself Is Already Here

Google's Threat Intelligence Group has identified PROMPTFLUX, an experimental malware strain that uses large language models to rewrite its own attack code in real time. For European organisations accelerating AI adoption, the discovery is a sharp reminder that the threat landscape is mutating faster than most security teams are prepared for.

Google's Threat Intelligence Group has confirmed the existence of PROMPTFLUX, an experimental malware strain capable of rewriting its own code by communicating directly with large language models. This is not a theoretical risk or a researcher's thought experiment. It is working, documented malware that signals a fundamental break from the static, signature-based threats that most European security teams are trained to fight.

[[KEY-TAKEAWAYS:PROMPTFLUX uses AI models to generate fresh attack code on demand, bypassing signature-based detection|The malware exploits prompt injection, turning legitimate AI tools into unwitting attack infrastructure|Google's own AI agent Big Sleep is already being deployed to hunt for vulnerabilities in response|Underground markets are commoditising AI attack tools, lowering the barrier for less-skilled threat actors|European organisations in finance, healthcare and critical infrastructure face heightened exposure as AI adoption accelerates]]

Advertisement

How PROMPTFLUX Actually Works

PROMPTFLUX takes a "just-in-time" approach to malicious activity. Rather than embedding static attack routines into its payload, it sends carefully crafted prompts to AI systems, including Google's own Gemini, and uses the responses to generate malicious scripts dynamically. This technique, known as prompt injection, exploits the very capabilities that make AI models useful for legitimate development work.

The practical consequences for defenders are severe. Traditional cybersecurity relies on recognising patterns and code signatures. PROMPTFLUX breaks that model by presenting a genuinely novel code variant on each deployment. It can obfuscate its own structure to evade antivirus scanning, generate new attack functions on demand, and adapt its behaviour in response to the defences it encounters. Catching it is, as one researcher put it, like trying to catch smoke.

Editorial photograph taken inside a modern European cybersecurity operations centre, showing analysts monitoring large wall-mounted screens displaying network traffic visualisations and threat dashboa

Why This Matters for Europe Specifically

European organisations are not bystanders in this shift. The continent is mid-stride through a wave of AI adoption spanning financial services, healthcare, public administration and critical infrastructure. That same digital transformation creates a larger and more complex attack surface.

Ciaran Martin, former chief executive of the UK's National Cyber Security Centre and now a professor at the Blavatnik School of Government in Oxford, has argued consistently that the gap between attacker and defender capabilities is the central problem in modern cybersecurity. Self-modifying, AI-generated malware widens that gap dramatically by automating the innovation cycle on the attacker's side.

The European Union Agency for Cybersecurity, ENISA, flagged AI-assisted attacks as a priority threat vector in its 2024 Threat Landscape report. The agency pointed to the growing use of generative AI to accelerate reconnaissance, craft more convincing phishing lures, and automate lateral movement inside compromised networks. PROMPTFLUX represents the logical next step: AI not merely assisting an attack, but generating the attack itself.

The Underground Market Angle

Google's research also surfaces connections to financially motivated threat actors, indicating that AI attack capabilities are moving beyond state-sponsored operations into the broader criminal ecosystem. An underground marketplace for illicit AI tools is forming, and it is lowering the barrier to entry for less technically skilled actors.

CrowdStrike's 2026 Global Threat Report puts the scale of the prompt-injection problem in stark terms: adversaries exploited legitimate generative AI tools at more than 90 organisations by injecting malicious prompts to generate commands for stealing credentials and cryptocurrency. That figure covers incidents already logged; it does not account for campaigns that went undetected.

State-level actors add another layer of concern. Groups linked to North Korea, Iran and China have all been documented experimenting with AI to sharpen their operations, according to Google's own threat intelligence assessments. For European governments and defence contractors, that is not an abstract geopolitical observation. It is a procurement and incident-response planning reality.

What Makes AI Malware Harder to Fight

The characteristics that make PROMPTFLUX and its successors so difficult to counter include the following:

  • Dynamic code generation that produces a unique variant for each deployment, defeating signature libraries instantly.
  • Real-time adaptation to active security measures, meaning the malware learns from failed intrusion attempts within a single campaign.
  • Exploitation of legitimate AI services as the delivery infrastructure, which avoids traditional malware distribution red flags.
  • Potential for autonomous operation without continuous human oversight from the attacker.
  • Obfuscation capabilities baked in, making static analysis tools significantly less reliable.

Behavioural detection systems offer more promise than signature scanning against these threats, but they too must be retrained continuously as evasion techniques evolve. The security industry's standard playbook, written for a world of fixed code and predictable attack patterns, needs a substantial rewrite.

The Current State of PROMPTFLUX

There is a degree of reassurance in Google's findings, though it should not be overstated. PROMPTFLUX appears to still be in active development. Google's analysts found commented-out features and API rate-limiting mechanisms in the code, suggesting ongoing testing rather than live deployment at scale. This discovery is a warning, not a post-mortem.

The comparison between traditional and AI-powered malware capabilities illustrates how much ground defenders need to recover:

  • Traditional malware carries static code signatures; AI-powered malware generates code dynamically on each run.
  • Traditional threats behave predictably and are catalogued over time; AI threats adapt their response to the environment they encounter.
  • Traditional attack vectors are fixed at compile time; context-aware AI targeting selects vectors based on real-time reconnaissance.
  • Traditional malware requires manual updates from its operators; self-modifying malware updates itself.

The Defender's Counter-Move

Google has not simply identified the threat and moved on. The company has deployed Big Sleep, an AI agent designed specifically to identify security vulnerabilities before attackers do. It is an early example of the AI-versus-AI dynamic that will define enterprise security for the next decade.

For European security teams, the practical response involves a shift in investment priorities. Max Heinemeyer, chief product officer at Darktrace, a Cambridge-based AI security company with deep European deployment, has argued publicly that autonomous response capabilities, systems that can contain an evolving threat faster than any human analyst, are no longer optional for organisations running complex AI workloads. The speed at which PROMPTFLUX-style malware can mutate makes human-in-the-loop response times inadequate.

Concretely, organisations should be moving in the following directions:

  1. Invest in AI-powered behavioural monitoring that detects anomalous activity rather than matching against known signatures.
  2. Implement robust input validation and content filtering for any internal AI tools to reduce prompt-injection exposure.
  3. Develop incident-response playbooks specifically for rapidly evolving threats that traditional tools are likely to miss on first encounter.
  4. Audit third-party AI integrations for prompt-injection attack surfaces, particularly in customer-facing or finance-adjacent systems.

The EU's AI Act adds a compliance dimension that is worth acknowledging. Organisations deploying high-risk AI systems are required to maintain logging and human oversight mechanisms; those same controls will also be useful forensically when AI-assisted attacks do breach perimeters. Regulatory compliance and security hardening, for once, point in the same direction.

A Threshold Moment

PROMPTFLUX is still experimental. But the underlying technique, using a publicly accessible AI model as a real-time code factory for malicious payloads, is not complex to replicate. The barrier to building a production-ready version is falling as model capabilities improve and API access becomes cheaper. Europe's cybersecurity community has a narrow window to get ahead of this. It should use it.

Updates

  • published_at reshuffled 2026-04-29 to spread distribution per editorial directive
AI Terms in This Article 6 terms
generative AI

AI that creates new content (text, images, music, code) rather than just analyzing existing data.

embedding

Converting text or images into numbers that capture their meaning, so AI can compare them.

API

Application Programming Interface, a way for software to talk to other software.

AI-powered

Uses artificial intelligence as part of its functionality.

at scale

Applied broadly, to a large number of users or use cases.

ecosystem

A network of interconnected products, services, and stakeholders.

Advertisement

Comments

Sign in to join the conversation. Be civil, be specific, link your sources.

No comments yet. Start the conversation.
Sign in to comment