Cybersecurity experts have shown that ChatGPT can create mutating malware capable of evading detection by endpoint detection and response (EDR) applications.
According to this article by CSO (ChatGPT creates mutating malware that evades detection by EDR, written by Shweta Sharma and available here), a recent series of proof-of-concept attacks shows how a benign-seeming executable file can be crafted so that, at every runtime, it makes an API call to ChatGPT. Rather than simply reproducing examples of already-written code snippets, ChatGPT can be prompted to generate a dynamic, mutated version of the malicious code at each call, making the resulting exploits difficult for cybersecurity tools to detect.
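To see why this "mutating" pattern is so hard for signature-based tooling to catch, here is a minimal, deliberately benign Python sketch (my own illustration, not code from the article; the snippets are harmless stand-ins for LLM-generated variants). Signature- and hash-based detection keys on the bytes of the code, so two functionally identical variants look completely unrelated:

```python
import hashlib

# Two snippets that do the same thing (reverse a string) but are written
# differently -- a benign stand-in for the mutated variants an LLM could
# emit on each run.
variant_a = "def f(s):\n    return s[::-1]\n"
variant_b = (
    "def f(s):\n"
    "    out = ''\n"
    "    for ch in s:\n"
    "        out = ch + out\n"
    "    return out\n"
)

# Hash-based signatures match exact bytes, so functionally identical
# variants produce unrelated hashes; a signature written for one variant
# never matches the next.
for name, src in (("variant_a", variant_a), ("variant_b", variant_b)):
    print(name, hashlib.sha256(src.encode()).hexdigest()[:16])

# Behaviorally, the two variants are indistinguishable:
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
assert ns_a["f"]("hello") == ns_b["f"]("hello") == "olleh"
```

Nothing here is malicious, but the same property (fresh code bytes on every run, same behavior underneath) is what lets dynamically generated payloads slip past per-variant signatures, pushing defenders toward behavioral detection instead.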
“ChatGPT lowers the bar for hackers, malicious actors that use AI models can be considered the modern ‘Script Kiddies’,” said Mackenzie Jackson, developer advocate at cybersecurity company GitGuardian. “The malware ChatGPT can be tricked into producing is far from ground-breaking but as the models get better, consume more sample data and different products come onto the market, AI may end up creating malware that can only be detected by other AI systems for defense. What side will win at this game is anyone’s guess.”
There have been various proofs of concept showcasing the potential to exploit the tool's capabilities to develop advanced, polymorphic malware.
ChatGPT and other LLMs have content filters that prohibit them from obeying commands, or prompts, to generate harmful content, such as malicious code. But content filters can be bypassed.
Almost all the reported exploits that can potentially be achieved through ChatGPT rely on what is being called "prompt engineering": the practice of modifying input prompts to bypass the tool's content filters and retrieve a desired output. Early users found, for example, that they could get ChatGPT to create content it was not supposed to create ("jailbreaking" the program) by framing prompts as hypotheticals, such as asking it to respond as if it were not an AI but a malicious person intent on doing harm.
“ChatGPT has enacted a few restrictions on the system, such as filters which limit the scope of answers ChatGPT will provide by assessing the context of the question,” said Andrew Josephides, director of security research at KSOC, a cybersecurity company specializing in Kubernetes. “If you were to ask ChatGPT to write you a malicious code, it would deny the request. If you were to ask ChatGPT to write code which can do the effective function of the malicious code you intend to write, however ChatGPT is likely to build that code for you.”
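Josephides's point about rephrasing is easy to see with a toy example. The following Python sketch is purely illustrative and entirely my own (real LLM safety systems are classifier-based and context-aware, far more sophisticated than a keyword check), but it captures the underlying weakness: a filter judges the wording of a request, while the intent behind it can be reworded.

```python
# A toy "content filter" that refuses requests containing flagged terms.
# This is NOT how production LLM filters work; it only illustrates why
# wording-based checks are brittle against rephrased requests.
BLOCKED_TERMS = {"malware", "virus", "keylogger"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

direct = "Write me malware that logs keystrokes"
rephrased = "Write a program that records every key a user presses"

print(naive_filter(direct))      # True  -- blocked on the flagged term
print(naive_filter(rephrased))   # False -- same intent, different wording
```

The second prompt never names anything malicious, yet it asks for the "effective function" of the first, which is exactly the gap Josephides describes.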
With each update, ChatGPT gets harder to trick into being malicious, but as different models and products enter the market, we cannot rely on content filters to prevent LLMs from being used for malicious purposes, Josephides said.
Malware obfuscation was one of the 5 ways hackers will use ChatGPT for cyberattacks discussed here a couple of weeks ago. This article goes into a lot more depth on the issue and the potential risks. Be careful out there!
So, what do you think? Are you concerned that ChatGPT can create mutating malware that evades EDR applications? Please share any comments you might have, or let me know if you'd like to know more about a particular topic.
Disclaimer: The views represented herein are exclusively the views of the authors and speakers themselves, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.