
Architect defense-in-depth security for generative AI applications using the OWASP Top 10 for LLMs

AWS Machine Learning - AI

Many customers are looking for guidance on how to manage security, privacy, and compliance as they develop generative AI applications. This post provides three guided steps for architecting risk management strategies when building generative AI applications with LLMs.


Register now: GenAI, risk & the future of security

CIO

The promise of generative AI means we are on the cusp of rethinking how businesses handle cybersecurity. Along with the promise comes the peril of AI being used to cause harm: launching more efficient malware, creating sophisticated deepfakes, or unintentionally disclosing code or trade secrets.



Counter AI Attacks with AI Defense

Palo Alto Networks

As a result, it is crucial for organizations to respond in kind by harnessing AI in their own cybersecurity defense strategies. Precision AI by Palo Alto Networks is our proprietary AI system that helps security teams trust AI outcomes, using rich data and security-specific models to automate detection, prevention, and remediation.


10 things to watch out for with open source gen AI

CIO

Leaderboards are a good place to start when evaluating open source gen AI, says David Guarrera, generative AI lead at EY Americas, and Hugging Face in particular has done a good job of benchmarking. So do open source LLMs release all that information? Companies are already very familiar with using open source code.


Cybersecurity Snapshot: Check Out Our No-Holds-Barred Interview with ChatGPT

Tenable

Threat actors could use an AI language model like ChatGPT to automate the creation of malicious content, such as phishing emails or malware, to conduct cyberattacks. However, it's important to note that AI language models like ChatGPT cannot initiate or execute malicious actions on their own.


Radar Trends to Watch: May 2024

O'Reilly Media - Ideas

OpenAI has shared some samples generated by Voice Engine, their (still unreleased) model for synthesizing human voices. Things generative AI can’t do: create a plain white image. “Ship it” culture is destructive. While this feature is useful for bug reporting, it has been used by threat actors to insert malware into repos.


6 generative AI hazards IT leaders should avoid

CIO

OpenAI’s recent announcement of custom ChatGPT versions makes it easier for every organization to use generative AI in more ways, but sometimes it’s better not to. But this wasn’t the first time Bing’s AI news had added dubious polls to sensitive news stories.