AI in Security — Ready for Prime Time

Jan 17, 2024
6 minutes

Yoni Allon – Predicting How Attackers Will Use AI


“AI’s Impact in Cybersecurity” is a regular blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42 with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, and the implications for the future of cybersecurity. In this installment, Yoni Allon, vice president of research, chats with David Moulton, director of thought leadership, sharing insights on the evolving landscape of AI’s role in the security operations center (SOC) and the opportunities and challenges it brings.

The fusion of artificial intelligence (AI) with cybersecurity has revolutionized how we safeguard our digital lives, in ways we are only just beginning to take advantage of en masse, across most, if not all, industries and from the novice to the well-seasoned pro. AI is no longer a predictable trope confined to slick sci-fi movies and Hugo Award-winning novels.

It’s finally ready for prime time, and most folks seem ready to dip their proverbial toes in, if only to see what the fuss is all about. Yet at Palo Alto Networks, earlier iterations of AI have been in use for well over 10 years and continue to adapt and evolve through this neo-Renaissance. The future looks bright, and we’re here for it.

Adapting to Evolving Threats

As defenders, we get to use the cool tools, but so do our adversaries. AI fighting AI is on the horizon, with augmented capabilities for both attackers and defenders, and we’re just now getting a glimpse of those future scenarios as the technology matures. Accordingly, Yoni highlights a significant shift in attacker tactics, foreseeing a surge in the use of generative AI by malicious actors.

This advancement enables the creation of intricate deepfakes and convincing phishing attempts, demanding enhanced vigilance within organizations. Advances in voice modeling and generative media are creating synthetic versions of real humans that challenge even the most critical eye and ear. Security practitioners, therefore, need to proactively counter these evolving threats to protect sensitive data and organizational integrity.

Concerns about AI pollution and data manipulation were also raised during the discussion. Deliberate corruption of datasets by attackers could lead to misleading or malicious content, posing substantial challenges to organizations. Mitigating these risks will require a reassessment of data-sharing practices and stringent data leak prevention strategies. Yoni explains:

“Any prompt can now become a way to learn new data, and it's easier to search on that data using generative AI. So, companies will probably change the way they are approaching data leak prevention and data sharing as a whole.”

Anticipating changes in security vendor strategies, Yoni suggested a potential return to more precise AI. This shift signifies a renewed focus on accuracy and efficacy in AI models, aiming to better serve the specific needs of customers. The diversification of AI models calls for a strategic reassessment among vendors to address the escalating threats effectively.

Metrics and Safeguarding AI Models

The discussion emphasized the importance of comprehensive metrics in evaluating AI’s impact on cybersecurity. While mean time to respond (MTTR) remains a pivotal metric, precision, hit rate, coverage and lift were highlighted as equally critical. Together, these metrics gauge how well an AI model detects threats, surfaces the alerts that matter and drives them to resolution, providing a holistic view of its performance.
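To make those measures concrete, here is a minimal Python sketch of how a SOC team might compute them over a batch of analyst-labeled alerts. The record fields and the exact definitions of hit rate, coverage and lift vary from team to team, so everything below is an illustrative assumption rather than a Palo Alto Networks API:

```python
# Illustrative alert-quality metrics over analyst-labeled alerts.
# "flagged" = the model surfaced the alert; "is_threat" = analyst-confirmed.

def triage_metrics(alerts):
    """alerts: list of dicts like {"flagged": bool, "is_threat": bool}."""
    flagged = [a for a in alerts if a["flagged"]]
    threats = [a for a in alerts if a["is_threat"]]
    true_positives = sum(1 for a in flagged if a["is_threat"])

    # Precision: of everything the model surfaced, how much was real?
    precision = true_positives / len(flagged) if flagged else 0.0
    # Hit rate: of the real threats in the feed, how many were surfaced?
    hit_rate = true_positives / len(threats) if threats else 0.0
    # Coverage: what share of the total alert volume did the model triage?
    coverage = len(flagged) / len(alerts) if alerts else 0.0
    # Lift: precision relative to the base rate of threats in the feed.
    base_rate = len(threats) / len(alerts) if alerts else 0.0
    lift = precision / base_rate if base_rate else 0.0

    return {"precision": precision, "hit_rate": hit_rate,
            "coverage": coverage, "lift": lift}

alerts = [
    {"flagged": True, "is_threat": True},
    {"flagged": True, "is_threat": False},
    {"flagged": False, "is_threat": True},
    {"flagged": False, "is_threat": False},
]
print(triage_metrics(alerts))
# {'precision': 0.5, 'hit_rate': 0.5, 'coverage': 0.5, 'lift': 1.0}
```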

To that end, safeguarding AI models emerged as a key concern. Yoni stressed the necessity of collaboration between cybersecurity experts and data scientists. Domain experts play a crucial role in ensuring the integrity of data used to train AI models, emphasizing the adage of "garbage in, garbage out." Yoni discusses this important alliance further:

“I think a key point that a lot of times is missing is that AI is not just being done by AI experts in a specific domain. When you're looking at how you build a cybersecurity AI model, you need to have both cybersecurity people, security researchers and a data scientist building that together. During that process of building it, when you're talking about data issues or data pollution or all these kinds of problems, you're expecting the domain expert to be able to say, ‘this data makes no sense in reality.’ Somebody's either playing with us, it's simulated data, it's fake data, and in that process of validation and training, you make sure that the data you put into the model is good. So it's a garbage in, garbage out problem. The people building those models need to be the gatekeepers of good data and then you get good results.”
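One way to picture that gatekeeping role is to encode the domain expert’s “this data makes no sense in reality” judgment as automated sanity checks that run before anything reaches the training set. The checks below are hypothetical examples for network telemetry, not actual Unit 42 validation rules:

```python
# A sketch of the "gatekeeper of good data" idea: domain-expert sanity
# checks applied to raw telemetry before it is used to train a model.
# The specific fields and thresholds here are invented for illustration.

def looks_real(event):
    """Return True if a network event passes basic domain sanity checks."""
    return all([
        0 <= event.get("port", -1) <= 65535,   # valid TCP/UDP port range
        event.get("bytes_sent", -1) >= 0,      # no negative traffic volumes
        event.get("duration_s", -1) >= 0,      # no negative session lengths
    ])

def gate_training_data(events):
    """Split raw events into trainable records and records for expert review."""
    clean, suspect = [], []
    for event in events:
        (clean if looks_real(event) else suspect).append(event)
    return clean, suspect

events = [
    {"port": 443, "bytes_sent": 1200, "duration_s": 3},
    {"port": 99999, "bytes_sent": -50, "duration_s": 2},  # simulated? fake?
]
clean, suspect = gate_training_data(events)  # suspect goes back to a human
```

Anything that lands in the suspect bucket goes back to a person who can judge whether it is simulated data, fake data or an attacker playing with the pipeline.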

Integration of AI in Personal and Professional Spaces

Yoni shared insights into his personal and professional integration of AI, showcasing its versatile utility. From aiding decision-making in finance and healthcare to assisting in language-related tasks, AI's pervasive influence across diverse domains became evident.

Additionally, the discussion highlighted the often unnoticed presence of AI in everyday experiences. Whether through network optimizations facilitating internet access or content filtering on social media platforms, AI silently shapes user interactions and experiences.

Addressing misconceptions, Yoni distinguished between AI and machine learning (ML). He emphasized the complexity of AI models, comprising intricate sets of rules, while also stressing adaptability and learning capabilities as defining factors:

“I'll start by saying that AI and ML generally is a set of ‘if’ statements. It all comes down to if. We're using binary computers, that's the way things are. I think focusing on whether it's a rule or not, is not the point. I think the point should be: Does it behave in a way that learns based on new data or is seeing so much data that you can't really represent the set of ifs as something that you can even comprehend? So, when you're thinking about an ML model, even one that's, let's say, classifying bad files, it's essentially a very large set of rules, a thousand different rules compacted together to become something we call an ML model. Is it still rules behind the scenes? Potentially, you could describe it as a set of rules. They're really, really hard to describe because there's thousands, thousands, and thousands of them, and that's true for any kind of AI including generative AI.”
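Yoni’s framing is easy to demonstrate: even a trained model can be unpacked into the nested ‘if’ statements it really is. Here is a minimal sketch using scikit-learn’s decision tree export; the two toy file features (size and byte entropy) are invented for illustration, not the features of any real classifier:

```python
# A trained ML model printed as the nested 'if' statements it really is.
# Toy data: [file_size_kb, byte_entropy], label 1 = "bad file".
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[120, 7.9], [4500, 7.8], [80, 3.1], [2000, 4.0], [300, 7.5], [50, 2.2]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned model as human-readable if/else rules.
print(export_text(model, feature_names=["file_size_kb", "byte_entropy"]))
# e.g.:
# |--- byte_entropy <= 5.75
# |   |--- class: 0
# |--- byte_entropy >  5.75
# |   |--- class: 1
```

Six samples yield a handful of readable rules; scale the same idea up to millions of parameters and the rule set is still there behind the scenes, just far beyond what anyone can comprehend, which is exactly Yoni’s point.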

The interview with Yoni unveiled crucial insights into the dynamic landscape of AI in cybersecurity, highlighting the pressing need for organizations to adapt to evolving threats, reconsider strategies, and prioritize safeguarding AI models. Collaboration between cybersecurity experts and data scientists has emerged as pivotal in leveraging AI effectively. In other words, AI in an enterprise environment might be viewed as a team sport, with respective roles and responsibilities all working holistically to achieve optimal outcomes.

As the cybersecurity landscape continues to evolve, it's imperative for practitioners to stay agile and proactive in integrating AI strategies that fortify defenses, protect sensitive data, and ensure resilience against emerging threats.

See how the Cortex platform is putting AI into the hands of defenders. Take the XSIAM tour today.

