The Dark Side of AI in Cybersecurity — AI-Generated Malware

May 15, 2024
8 minutes

Rem Dudas — AI-Generated Malware

“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42, with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, as well as implications for the future of cybersecurity.

As artificial intelligence (AI) continues to evolve at an unprecedented pace, its impact on the cybersecurity landscape is becoming increasingly apparent. While AI has the potential to revolutionize threat detection and defense strategies, it can also be exploited by malicious actors to create more sophisticated and evasive threats. In a thought-provoking interview on the Threat Vector podcast, Palo Alto Networks researchers, Bar Matalon and Rem Dudas, shed light on their groundbreaking research into AI-generated malware and their predictions for the future of AI in cybersecurity.

Unraveling the Complexity of AI-Generated Malware

When asked about the possibility of AI generating malware, Dudas responded unequivocally, stating, "The answer is yes. And there is a bit of a longer version for that answer. It's a lot more complex than it seems at first." The researchers embarked on a journey to generate malware samples based on MITRE ATT&CK techniques, and while the initial results were lackluster, they persevered and eventually generated samples that were both sophisticated and alarming. Dudas explains their process further:

“The main stage after the basic tinkering with the AI models was trying to generate malware samples that perform specific tasks based on MITRE techniques. If you're familiar with those, for example, we would like to generate a sample that does credential gathering from Chromium browsers. So, we tried generating those, and for each technique that we found interesting, we tried generating a specific sample. We did that for different operating systems – for Windows, macOS and Linux. And, we tested all of those samples against our product [Cortex], as well. That was the first stage I'd say.”

Impersonation and Psychological Warfare

One of the most disconcerting discoveries made by the researchers was the ability of AI models to impersonate specific threat actors and malware families with uncanny accuracy. By providing the AI with open-source materials, such as articles analyzing malware campaigns, the researchers were able to generate malware that closely resembled known threats, like the Bumblebee web shell.

Dudas predicts that "Impersonation and psychological warfare will be a big thing in the coming years." He cautions:

"...if you've tried asking generative AI to write a letter like Jane Austen would, the results are scary. Similarly, threat actors can impersonate others and plant false flags for researchers to uncover. I mean, that's purely speculative at this point, but imagine a nation actor with ill intent using psychological warfare, mimicking another nation's arsenal, kit or malware and planting false flags, trying to make it look as if another country or another threat actor made a specific attack. It opens the door for a lot of nasty business and makes attribution and detection pretty difficult for the defending side.”

The Perils of Polymorphic Malware

Another alarming trend highlighted by the researchers is the potential for AI to generate a vast array of malware variants with similar functionalities, overwhelming security professionals. Dudas warns, "Polymorphic malware – giving LLMs snippets of malware source code – could lead to a staggering amount of slightly different samples with similar functionalities that will overwhelm researchers."

This proliferation of polymorphic malware, combined with the increasing sophistication of AI-generated threats, could render traditional signature-based detection methods obsolete. As Dudas puts it, "Signature-based engines are dying. Detecting malware based on specific strings or other identifiers is already too wide a net. With the addition of polymorphy and automatically generated malware, this net could be torn completely."

Key characteristics of polymorphic malware include:

  • Mutation – The malware automatically modifies its code each time it replicates or infects a new system, making it difficult for signature-based detection methods to identify it.
  • Encryption – Polymorphic malware often uses encryption to hide its payload, further complicating detection and analysis.
  • Obfuscation – The malware employs various techniques to conceal its true functionality, such as dead code insertion, register renaming and instruction substitution.
  • Functionality Preservation – Despite the constant changes in its code, polymorphic malware retains its original malicious functionality.
  • Harder to Detect and Analyze – Due to its changing nature, polymorphic malware is more challenging for antivirus software to detect and for security researchers to analyze and understand.
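To make the detection gap concrete, here is a minimal, benign Python sketch (the "samples" are harmless placeholder strings, not malware): an exact-hash signature catches a known sample, a trivially mutated variant evades it, and a crude behavior-oriented check still fires. The strings, hashes and checks are illustrative assumptions, not any product's detection logic.

```python
import hashlib

# Toy "signature database": SHA-256 hashes of known samples.
# Harmless placeholder strings stand in for real file contents.
known_sample = b"read browser credential store; send data to remote host"
signature_db = {hashlib.sha256(known_sample).hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Exact-match detection: flag a sample only if its hash is already known."""
    return hashlib.sha256(sample).hexdigest() in signature_db

# A "polymorphic" variant: same behavior, trivially different bytes.
# A single extra space is enough to change the hash completely.
variant = b"read browser credential store;  send data to remote host"

print(signature_match(known_sample))  # True  - the original is caught
print(signature_match(variant))       # False - the variant slips past the signature

# A deliberately crude behavior-oriented check is less brittle,
# because it keys on what the sample does rather than its exact bytes.
SUSPICIOUS_ACTIONS = (b"credential store", b"send data to remote host")

def behavioral_match(sample: bytes) -> bool:
    return all(action in sample for action in SUSPICIOUS_ACTIONS)

print(behavioral_match(variant))  # True - behavior, not bytes, is matched
```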

The Evolution of Phishing and Scamming

Dudas also foresees a significant transformation in the area of phishing and scamming, due to the advanced natural language capabilities of large language models (LLMs). He explains:

"Since LLMs usually sound so natural to end users, I'd say the field of phishing and scamming will undergo the biggest alteration. For example, weird grammar, a sense of urgency and pressure, as well as spelling errors are the easiest ways to recognize a phishing email. With LLMs, these telltale signs are a thing of the past. You could generate an entire convincing campaign from scratch in no time with a basic understanding of what makes people tick, even if you do not speak the language."

AI algorithms can analyze vast amounts of publicly available data to create highly personalized phishing emails, tailored to specific individuals, increasing the likelihood of the recipient falling for the scam. AI-powered natural language generation (NLG) can create convincing and contextually relevant phishing emails that mimic human writing styles, complete with proper grammar and tone, making it harder for recipients to identify them as fraudulent.

Likewise, AI-driven chatbots and voice synthesis can be used to create realistic conversational interactions, tricking victims into divulging sensitive information or performing actions that benefit the scammer. Deepfakes, generated by AI, can produce fake audio and video content, such as impersonating a company executive or creating a false sense of urgency to manipulate victims into complying with the scammer's demands. AI can also analyze data on user behavior, such as when they are most likely to open and respond to emails, allowing scammers to optimize the timing and targeting of their phishing campaigns for maximum impact.

Fortifying Defenses Against AI-Generated Malware

To combat the rising threat of AI-generated malware, Bar Matalon advises investing in cutting-edge tools that employ dynamic detections and behavioral rules, such as Palo Alto Networks Cortex XDR or Cortex XSIAM. He emphasizes, "I think one of the best practices for organizations is to invest in advanced tools that leverage dynamic detections and behavior rules to detect all these new threats and stop them."

These AI-powered systems can identify and neutralize novel threats by analyzing program behaviors and connections in real time. Matalon predicts, "Security tools will increasingly leverage AI to dynamically identify new threats and stop them," highlighting the critical role AI will play in bolstering cybersecurity defenses.
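As a rough illustration of the difference between a signature and a behavioral rule, the sketch below flags a hypothetical process chain (an Office application spawning a script host with an encoded command) regardless of what the file's bytes look like. The rule, event fields and indicators are illustrative assumptions, not the actual logic of Cortex XDR or XSIAM.

```python
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    parent: str   # image name of the parent process
    child: str    # image name of the spawned process
    cmdline: str  # command line of the spawned process

# Hypothetical behavioral rule: an Office application spawning a script host
# with an encoded command is suspicious no matter what the payload's hash is.
OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SCRIPT_HOSTS = {"powershell.exe", "wscript.exe", "cmd.exe"}

def is_suspicious(event: ProcessEvent) -> bool:
    cmdline = event.cmdline.lower()
    encoded = "-enc" in cmdline or "frombase64string" in cmdline
    return (
        event.parent.lower() in OFFICE_APPS
        and event.child.lower() in SCRIPT_HOSTS
        and encoded
    )

event = ProcessEvent(
    parent="WINWORD.EXE",
    child="powershell.exe",
    cmdline="powershell.exe -nop -w hidden -enc SQBFAFgA...",
)
print(is_suspicious(event))  # True - flagged by behavior, with no signature involved
```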

The Shifting Landscape of Cybersecurity

As AI becomes more ubiquitous, the cybersecurity landscape is poised for significant disruption. Matalon cautions, "AI will help people with less technical knowledge become cyberthreats, lowering the barriers for more threat actors to join." He further predicts, "AI will be used to create lots of new types of malware, flooding the digital world with different threats," and "...threat actors will use AI to automate their work and be much more effective." This will lead to an increase in the volume and sophistication of attacks. Moreover, Matalon warns, "It would be much harder for researchers to attribute an attack to the threat actor behind it, since it would be possible to mimic another actor's tools and TTPs."

The Promise of AI in Threat Detection

Despite the daunting challenges posed by AI-generated malware, Dudas believes that AI will also play a pivotal role in enhancing threat detection capabilities. He envisions a future where "'Cybersecurity researchers' models that have been trained on content and material related to threat research...will be able to perform the same analysis tasks as researchers and will yield quality results in much shorter time frames."

This application of AI could help level the playing field, empowering cybersecurity professionals to stay ahead of the curve.
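As a rough sketch of that "researcher's assistant" idea, the snippet below sends a toy sandbox behavior report to a general-purpose LLM and asks for a structured triage summary. The OpenAI client, the model name and the report contents are illustrative assumptions; any comparable hosted or locally run model could fill the same role.

```python
# Minimal sketch: LLM-assisted triage of a (fabricated) sandbox behavior report.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

sandbox_report = """
Process tree: outlook.exe -> winword.exe -> powershell.exe (-enc ...)
Network: periodic POST to hxxp://203.0.113.7/gate.php
File system: wrote %APPDATA%\\svch0st.exe and set a Run-key persistence entry
"""

prompt = (
    "You are assisting a malware analyst. Summarize the behavior below, "
    "list the MITRE ATT&CK techniques it most likely maps to, and rate the "
    "severity (low/medium/high) with a one-sentence justification.\n\n"
    + sandbox_report
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```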

The insightful research conducted by Bar Matalon and Rem Dudas serves as a clarion call for the cybersecurity community. As we navigate the uncharted waters of an AI-driven threat landscape, it is imperative that we remain vigilant, adaptable and proactive in our approach to defense. By harnessing the power of AI in our own security tools and strategies, we can fortify our defenses and stay one step ahead of the malicious actors seeking to exploit this transformative technology. As Matalon aptly puts it, "Maybe that's the way we'll do that in the future – that the best solution for a bad person with an AI model is the good person with an AI model. Right?"

Ready for next steps to adopt GenAI securely and confidently? Get your Unit 42 AI Security Assessment today!

