
Earlier this month, Google released a report claiming to have observed the first AI-augmented malware used in real-world attacks. The discovery marks a significant step forward in cybercriminals' adoption of generative AI. While security researchers have tracked the adversarial use of AI for some time, this would be the first known instance of malware directly incorporating AI into its functionality during execution.
What’s Notable and Unique
- Though the technique is still in its nascency, one example of this malware, PROMPTSTEAL, was deployed by Russia in an operational environment against Ukraine, indicating both the utility of AI capabilities and a desire to leverage them immediately.
- The development of AI-enabled malware is not unique to nation-state actors. Some of the malware Google reported was developed with clear financial motivations, and several AI-augmented tools are being advertised on cybercriminal forums.
Analyst Comments
Polymorphic malware, malware capable of changing its own code to evade detection, is not a novel concept; the first examples date back to the early 1990s. However, those examples typically achieved their objectives by modifying obfuscation or encryption routines rather than by introducing new code and functionality. By contrast, the sophistication of threat actors' use of AI has increased drastically throughout 2025, evolving from simple script generation to incorporation into every phase of the attack lifecycle. At the present pace, AI-enabled malware is unlikely to remain in this fledgling state for long.
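To make that distinction concrete, the sketch below (a hypothetical, deliberately harmless Python example; all names are illustrative) shows the classic polymorphic pattern: the same payload is re-encoded with a fresh random key each "generation", so its byte signature changes while its decoded content does not. Real polymorphic engines also mutate their decoding stubs, which is omitted here.

```python
import os

def encode(payload: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: enough to change the byte signature,
    # while the decoded payload stays identical (XOR is symmetric).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

def new_generation(payload: bytes) -> tuple[bytes, bytes]:
    # Each "generation" picks a fresh random key, so two copies of
    # the same payload share no byte sequence a simple signature
    # scanner could match on.
    key = os.urandom(16)
    return key, encode(payload, key)

payload = b"echo 'harmless demo payload'"

key_a, gen_a = new_generation(payload)
key_b, gen_b = new_generation(payload)

assert gen_a != gen_b                   # the signatures differ ...
assert encode(gen_a, key_a) == payload  # ... but the content is unchanged
assert encode(gen_b, key_b) == payload
```

The point of contrast is that the decoded payload above is identical every time. What Google describes is different in kind: malware that invokes an AI model at runtime to produce genuinely new code and functionality, not merely a new encoding of the old code.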