The Weaponization of AI by Cyber Criminals: Cybersecurity Trends

A recent article explores the weaponization of AI by cyber criminals versus the use of AI by defenders against cyberattacks in an ultimate struggle for dominance.

The California Western Law Review article is titled The Darwinian Effect: The Weaponization of Artificial Intelligence By Cyber Criminals (written by Justine Phillips, Avi Toltzis, Victoria Fanous and Gaurav Lalsinghani and available here). In it, the authors explain how AI is being used to enhance social engineering attacks, create synthetic media like deepfakes, and develop sophisticated malicious code. As a result, the cybersecurity landscape is evolving into a Darwinian struggle where the most adaptable and resilient will survive.

The 28-page article highlights real-world case studies, such as the 2019 synthetic audio whaling scam and the 2024 Hong Kong deepfake video call fraud, demonstrating the evolution and impact of AI-powered cybercrime. It also examines methods and risks associated with AI-driven cyberattacks. As of 2024, cybercriminals are using AI in three main ways:

  • Socially Engineering Humans: Using aggregated stolen data to manipulate and deceive individuals.
  • Creating Synthetic Media and Deepfakes: Using audio and video filters to convincingly impersonate individuals.
  • Creating Malicious Code: Using AI to develop and deploy sophisticated malicious code, like polymorphic malware.

The article even discusses the potential for cyber criminals to leverage quantum computing to decrypt encrypted data, making that data easily accessible and thus a more appealing target for theft.

To address the weaponization of AI by cyber criminals, the authors propose a multi-layered risk mitigation strategy, emphasizing the importance of combining human awareness, robust processes, and advanced technologies to combat these threats and build cyber resilience. The article goes on to explain defense strategies against AI-driven cyberattacks leveraging (guess what?) people, process, and technology:

  • People: Employee awareness training focused on recognizing AI-enhanced social engineering tactics and deepfakes is crucial.
  • Processes: Implementing secure verification procedures for sensitive requests, such as wire transfers and password resets, is essential.
  • Technology: Deploying advanced email filters, multi-factor authentication, anomaly detection algorithms, and deepfake detection tools can strengthen defenses.
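To make the anomaly-detection idea above a bit more concrete, here is a minimal, purely illustrative sketch of one common approach (a z-score check over transaction amounts). The data, threshold, and function name are hypothetical examples, not anything from the article; real detection systems use far richer features and models:

```python
# Toy anomaly detection via z-scores (illustrative sketch only).
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean -- a crude 'unusual activity' signal."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Example: routine wire transfers plus one outlier request.
transfers = [9800, 10200, 9900, 10100, 10050, 9950, 250000]
print(flag_anomalies(transfers))  # flags the $250,000 outlier: [6]
```

The same basic pattern (baseline the normal, flag the deviant) underlies the email filters and deepfake detectors the article mentions, just with much more sophisticated statistics under the hood.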

As the authors note in the Conclusion: “Malicious use of AI will evolve quickly, and only the most resilient to adaptation and change will prevail…Cyber-resilience by design is a necessity to survive in a world where AI has been weaponized by cyber criminals.” The AI arms race is on, and only the fittest will survive.

Hat tip to Sheila Grela for the heads up on this article!


So, what do you think? What is your organization doing to combat the weaponization of AI by cyber criminals? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

