by Aaron Painter

The cyber pandemic: AI deepfakes and the future of security and identity verification

Opinion
May 02, 2024 | 5 mins
Artificial Intelligence, Security

Attackers have seen huge success using AI deepfakes for injection and presentation attacks – which means we’ll only see more of them. Advanced technology can help prevent (not just detect) them.


Security and risk management pros have a lot keeping them up at night. The era of AI deepfakes is fully upon us, and unfortunately, today’s identity verification and security methods won’t survive it. In fact, Gartner estimates that by 2026, nearly one-third of enterprises will consider identity verification and authentication solutions unreliable on their own due to AI-generated deepfakes. Of all the threats IT organizations face, an injection attack that leverages AI-generated deepfakes is the most dangerous. Recent stories show that deepfake injection attacks can defeat popular Know Your Customer (KYC) systems – and with a 200% rise in injection attacks last year and conventional detection unable to stop them, CIOs and CISOs must develop a strategy for preventing attacks that use AI-generated deepfakes.

First, you’ll need to understand exactly how bad actors use AI deepfakes to attack your systems. Then, you can develop a strategy that integrates advanced technologies to help you prevent (not just detect) them.

The digital injection attack

A digital injection attack occurs when someone “injects” fake data, including AI-generated documents, photos, and biometric images, into the stream of information received by an identity verification (IDV) platform. Bad actors use virtual cameras, emulators, and other tools to bypass cameras, microphones, or fingerprint sensors and fool systems into believing they’ve received genuine data.

Injection attacks are now five times more common than presentation attacks, and when used in combination with AI-generated deepfakes, they’re nearly impossible to detect. Attackers use deepfake ID documents to fool KYC processes or inject deepfake photos and videos to spoof facial biometrics systems. A prime example is the recent attack that injected an AI deepfake video feed to defraud a Hong Kong company of $25 million. As generative AI tools have proliferated, deepfake attacks have surged, with Onfido reporting a 3,000% increase last year. The NSA, FBI, and CISA collaboratively shared their concerns about the threat of AI deepfakes, warning that “The increasing availability and efficiency of synthetic media techniques available to less capable malicious cyber actors indicate these types of techniques will likely increase in frequency and sophistication.”

The key to stopping injection attacks is to prevent digitally altered images or documents from being introduced in the first place. And the only way to do this is to leverage advanced security technologies such as mobile cryptography. The cryptographic signatures provided by mobile devices, operating systems, and apps are practically impossible to spoof because they’re backed by the hardware-level security built into Apple’s and Google’s platforms. Using mobile cryptography to verify the authenticity of the device, its operating system, and the app it’s running is a decisive measure for stopping injection attacks in their tracks.
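To make this concrete, below is a minimal sketch of the server-side half of such a check, written in Python with the cryptography library. It assumes the device has already enrolled a hardware-backed P-256 public key during onboarding; the function names and flow are illustrative, not any specific vendor’s attestation API (Apple’s App Attest and Google’s Play Integrity are the production-grade equivalents).

# Minimal sketch: server-side verification of a device-signed challenge.
# Assumes the device enrolled a hardware-backed P-256 public key earlier;
# names and flow are illustrative, not a specific vendor's API.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def issue_challenge() -> bytes:
    # A fresh random nonce prevents replaying an old, valid signature.
    return os.urandom(32)

def device_is_authentic(public_key: ec.EllipticCurvePublicKey,
                        challenge: bytes, signature: bytes) -> bool:
    # Accept the session only if the signature over this exact challenge
    # verifies against the key enrolled from the device's secure hardware.
    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

Because the private key never leaves the device’s secure hardware, an emulator or virtual camera feeding in a deepfake has no way to produce a valid signature over a fresh challenge.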

The presentation attack

Presentation attacks present fake data to a sensor or document scanner with the intent to impersonate an end user and fool a system into granting access. Facial biometrics presentation attacks take many forms, using deepfake ID documents, “face-swaps,” and even hyper-realistic masks to impersonate someone. IDV and KYC platforms use presentation attack detection (PAD) to verify the documents and selfies that are presented, but many PAD techniques can be beaten by injection attacks that leverage AI deepfakes. 
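One way PAD raises the bar is with an active challenge-response: the system issues a random, short-lived prompt (blink, turn your head) that a pre-recorded or replayed deepfake cannot anticipate. The Python sketch below shows only that scaffolding; the prompt list, time window, and function names are assumptions for illustration, and the actual frame analysis – did the user really blink? – would be done by a trained PAD model.

# Illustrative active-liveness scaffolding: a random, expiring prompt.
# A pre-rendered or replayed deepfake video cannot react to a prompt
# it has never seen, so unpredictability itself is a defense.
import secrets
import time

PROMPTS = ["turn head left", "turn head right", "blink twice", "smile"]

def issue_liveness_challenge() -> dict:
    # Pick an unpredictable prompt and timestamp it.
    return {"prompt": secrets.choice(PROMPTS), "issued_at": time.time()}

def challenge_still_valid(challenge: dict, max_age_s: float = 10.0) -> bool:
    # Reject responses that arrive after the window closes; a PAD model
    # would separately verify the video actually performs the prompt.
    return (time.time() - challenge["issued_at"]) <= max_age_s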

Staying ahead of injection and presentation attacks

Over the past couple of years, we’ve seen thousands of companies fall victim to these attacks. The impacts are incalculable: hundreds of millions of dollars looted, ransomware shutdowns that impact millions of people, personal information stolen, and reputations damaged beyond repair. And the problem is only getting worse. 

The only strategy for stopping these attacks is to use identity verification tools that prevent injection attacks from happening in the first place, then focus on verifying the actual person behind the screen. This way, IT organizations can also shut down human social engineering vectors that circumvent or exploit IDV processes. In addition, by adding verification technologies like device intelligence, AI models, and behavioral biometrics, IT organizations can further reduce the risk of first-party fraud. Finally, invest in solutions that protect your multi-factor authentication (MFA) and password recovery processes: these are primary attack vectors that companies often overlook.
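As a sketch of how these layers might fit together in code, the following Python chains the checks discussed in this article. Every helper, field, and threshold here is a hypothetical stand-in for a real vendor or in-house capability, not anyone’s actual implementation.

# Hypothetical layered IDV decision flow. Every check below is a stub
# standing in for a real capability (device attestation, PAD model,
# document forensics, risk engine); names and threshold are illustrative.
from dataclasses import dataclass

RISK_THRESHOLD = 0.5  # assumed cut-off, for illustration only

@dataclass
class VerificationRequest:
    device_attestation: bytes  # signed proof from the device (IAD layer)
    selfie_video: bytes        # live capture for PAD analysis
    id_document: bytes         # scanned document image
    behavioral_signals: dict   # typing cadence, device history, etc.

def check_device_attestation(proof: bytes) -> bool:
    return bool(proof)  # stub: real check verifies a hardware-backed signature

def check_liveness(video: bytes) -> bool:
    return bool(video)  # stub: real check runs a PAD model on the frames

def check_document_integrity(doc: bytes) -> bool:
    return bool(doc)    # stub: real check inspects for tampering or AI artifacts

def risk_score(signals: dict) -> float:
    return 0.0 if signals else 1.0  # stub: real engine weighs many signals

def verify_identity(req: VerificationRequest) -> bool:
    # Fail fast, most decisive layer first: block injection before
    # spending effort analyzing media that may never have come from
    # a real camera at all.
    return (check_device_attestation(req.device_attestation)  # injection
            and check_liveness(req.selfie_video)              # presentation
            and check_document_integrity(req.id_document)     # fake documents
            and risk_score(req.behavioral_signals) < RISK_THRESHOLD)

Ordering matters here: if device attestation fails, nothing downstream can be trusted, so the cheaper and more decisive check runs first.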

Attackers have seen huge success using AI deepfakes for injection and presentation attacks – which means we’ll only see more of them. The key to stopping this threat is to develop a multi-layered approach that combines PAD, injection attack detection (IAD), and image inspection. This strategy gives companies a basis for navigating the “cyber pandemic” we face and moving toward a more secure, trusted future.

Aaron Painter
Contributor

Aaron Painter is a deepfake expert and CEO of Nametag Inc., the world's first identity verification platform designed to safeguard accounts against impersonators and AI-generated deepfakes.