article thumbnail

The Dark Side of AI in Cybersecurity — AI-Generated Malware

Palo Alto Networks

In a thought-provoking interview on the Threat Vector podcast , Palo Alto Networks researchers Bar Matalon and Rem Dudas shed light on their groundbreaking research into AI-generated malware and shared their predictions for the future of AI in cybersecurity. And there is a bit of a longer version for that answer.

Malware 91
article thumbnail

For security leaders, AI is a mounting peril and an emerging shield

CIO

Malware, phishing, and ransomware are fast-growing threats given new potency and effectiveness with AI – for example, improving phishing attacks, creating convincing fake identities or impersonating real ones. Where needed, these platforms can be augmented by specialized security tools targeting specific vulnerabilities.

Security 318
Insiders

Sign Up for our Newsletter

This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.

Trending Sources

article thumbnail

More connected, less secure: Addressing IoT and OT threats to the enterprise

CIO

Malware is the top threat to IoT/OT With so many vulnerabilities plaguing IoT devices, these devices are attractive and relatively easy entry points into corporate networks for attackers. In fact, two notorious botnets, Mirai and Gafgyt, are major contributors to a recent surge in IoT malware attacks.

IoT 322
article thumbnail

The essential AI checklist: Future-proof your workforce in six simple steps

CIO

Understand your team’s skills gaps What AI training will be needed? What level of training is sufficient? Award-winning HP Wolf endpoint security uses AI-based protection to defend against known and unknown malware. Meet your people where they’re at What are specific personas tasked with? What AI tools are they already using?

Windows 284
article thumbnail

Dulling the impact of AI-fueled cyber threats with AI

CIO

While LLMs are trained on large amounts of information, they have expanded the attack surface for businesses. From prompt injections to poisoning training data, these critical vulnerabilities are ripe for exploitation, potentially leading to increased security risks for businesses deploying GenAI.

article thumbnail

How AI continues to reshape the cybersecurity arsenal

CIO

Having a SAST tool that identifies the common pattern of bugs in developer code and curates (let’s say) training sessions, or (even better) looks out for those vulnerabilities more thoroughly and with stricter rule sets, can very well prove to be a game-changer.

Security 320
article thumbnail

10 things to watch out for with open source gen AI

CIO

Even if you don’t have the training data or programming chops, you can take your favorite open source model, tweak it, and release it under a new name. Apple actually released not just the code, but also the model weights, the training data set, training logs, and pre-training configurations.