Rick Grinnell
Contributor

The AI cat and mouse game has begun

Opinion
Apr 29, 2024 | 5 mins
Artificial Intelligence | Security

Proactive adoption of sophisticated AI-based solutions is necessary to stay ahead of digital imposters.

Credit: fizkes / Shutterstock

If you are a CIO or CISO and haven’t yet read the article “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” you should, and then share it with your entire company. It could save your company millions, and potentially much more.

The incident involved scammers who used publicly available YouTube video and audio of targeted senior executives to create deepfake representations, deceiving a finance employee into executing multiple transactions to bank accounts in Hong Kong and causing significant financial damage to the company. The digital impostors mimicked the finance worker’s actual team with disturbing accuracy.

This real-world deepfake event is precisely what CIOs and CISOs have been worried about for years. 

Fight AI fraud with AI tools 

What was once hypothetical has now become very real, further raising the bar for CISOs and CIOs in their daily battle with fraud. Protecting networks from hacks like this means going beyond having employees change passwords regularly. Two-factor authentication alone just won’t cut it.

To counter AI-generated threats, CIOs and CISOs must deploy AI-based defensive measures. Cutting-edge solutions are available that utilize AI for detecting and validating identities and authorizing transactions in real time, offering a potent countermeasure against these sophisticated attacks.

AI-based identity management and access control technologies are essential to this defense. These solutions, which leverage mobile cryptography, device telemetry, and AI algorithms, are effective at neutralizing deepfake and mobile injection attacks, protecting the identities of employees, partners, and customers.
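
To make this concrete, here is a minimal sketch, in Python, of how device-bound cryptography and an AI-derived liveness score might jointly gate access. Every name and threshold here is hypothetical, standing in for the kinds of checks these products perform rather than any vendor’s actual API:

```python
# Hypothetical sketch only: combine two independent factors before
# granting access. Names and thresholds are illustrative, not a real API.
import hmac
import hashlib
from dataclasses import dataclass

@dataclass
class DeviceSignal:
    telemetry: bytes       # raw telemetry blob reported by the device
    attestation: bytes     # signature over the telemetry, made on-device
    liveness_score: float  # 0.0-1.0 output of a deepfake/liveness model

LIVENESS_MIN = 0.90  # illustrative cut-off, tuned per risk appetite

def attestation_valid(device_key: bytes, signal: DeviceSignal) -> bool:
    """Verify the telemetry was signed by a key only the enrolled device
    holds. Real deployments use asymmetric platform attestation; an HMAC
    stands in for the idea here."""
    expected = hmac.new(device_key, signal.telemetry, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signal.attestation)

def authorize(device_key: bytes, signal: DeviceSignal) -> bool:
    # Both factors must pass: a deepfake replay fails the liveness check,
    # while a cloned or injected client fails device attestation.
    return (attestation_valid(device_key, signal)
            and signal.liveness_score >= LIVENESS_MIN)
```

The point of combining the factors is that a convincing deepfake alone is not sufficient; the attacker would also need the enrolled device’s key.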

Social engineering for access

Hackers like the ones referenced above are often motivated by financial gain, but they may also aim to create political disruption or simply ruin a company’s reputation, among other motives.

Typical tactics may involve phishing emails or deceptive social media messages designed to steal company credentials. Last fall, I wrote about the attacks on MGM Resorts International and Caesars Entertainment perpetrated by the hacker groups BlackCat/ALPHV and Scattered Spider. This ransomware-based crime involved hackers demanding cash payments from the companies after breaking into databases that included members’ driver’s license information and Social Security numbers.

No AI-based deepfake technology was used in these attacks. Rather, the attackers took a lower-tech approach, using social engineering to impersonate an employee, and the fooled IT help desk provided access. Lesson learned: once access is given, it’s too late. These attacks will only become more common and severe as AI enters the equation.

Whom can you trust?

Distributed teams and remote workers make this problem worse. It’s impossible for the help desk to validate every one of these employees, even with the aid of visual verification. The problem compounds when you consider business partners, customers, and third-party vendors. You may trust several third parties with network and data access, yet know too little about them and their employees to mitigate the risk. For example, a scammer faking a vendor’s voice may call to confirm shipments or validate payment instructions. Someone in your organization may have met that vendor at some point, believe the fake is real, and willingly provide account information, much as the Hong Kong finance worker did. What happens then?

The key is to ‘shut your front door’ using new AI solutions that help manage credentials, verify employee identity, and control access. Some software products already combine behavioral and biometric signals in real time to confirm identity and access privileges; a simple sketch of this kind of signal fusion follows the list below.

  • Protect Your Digital Presence: Utilize AI to safeguard social media and online assets. With the rise of spoofed business pages on platforms like Instagram or Facebook, it’s crucial to defend against the potential damage to sales, reputation, and customer trust.
  • Defend Against Deepfakes: AI-based real-time identity verification tools are vital in combating deepfake threats, ensuring secure transactions and account modifications by verifying user identities.
  • Validate Every Interaction: In an era where identity and credential spoofing are rampant, CIOs and CISOs must ensure the integrity of every transaction and identity verification process.
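
To illustrate the last point, here is a hedged sketch of what validating every interaction can look like in code: behavioral and biometric signals fused into one risk decision before a payment is released. The signal names, weights, and threshold are invented for this example, standing in for what a real product would learn from enrollment and behavioral baselines:

```python
# Illustrative only: fuse behavioral and biometric signals into a single
# approve/escalate decision. Weights and cut-off are made-up values, not
# taken from any product mentioned in this article.
from typing import Dict

WEIGHTS = {
    "voice_match": 0.4,      # biometric: similarity to the enrolled voice
    "typing_cadence": 0.2,   # behavioral: keystroke dynamics vs. baseline
    "device_trust": 0.2,     # is this a known, attested device?
    "geo_consistency": 0.2,  # does the location fit recent history?
}
APPROVE_AT = 0.85  # illustrative threshold

def transaction_ok(signals: Dict[str, float]) -> bool:
    """Each signal is normalized to [0, 1]; higher means more trustworthy.
    A score below the threshold routes the transaction to manual,
    out-of-band review instead of silently approving it."""
    score = sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())
    return score >= APPROVE_AT

# A convincing deepfake voice alone is not enough when the device and
# location signals fail to corroborate it.
print(transaction_ok({
    "voice_match": 0.95,
    "typing_cadence": 0.30,
    "device_trust": 0.10,
    "geo_consistency": 0.40,
}))  # False: escalate to a human, e.g. a call-back on a known number
```

The design choice matters more than the specific numbers: no single signal, however convincing, should be able to authorize a high-value transaction on its own.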

Recently, hackers even found ways to steal stored biometric authorization files through an iOS and Android trojan called GoldPickaxe. This should set off further alarms: the biometric data stored to match your fingerprint or face scan is itself susceptible to attack.

In response to these evolving threats, comprehensive training on the risks of identity theft, AI fraud, and deepfake scams is essential. However, education alone is not enough, and you can’t do all this manually. Proactive adoption of sophisticated AI-based solutions is necessary to stay ahead of these cybercriminals.

The question isn’t just whether you’re ready, but how quickly and effectively you can adopt AI-based analytic solutions for identity verification and access authorization. The AI game of cat and mouse has begun – are you ready?