5 Ways Hackers Will Use ChatGPT for Cyberattacks: Cybersecurity Trends

Found this terrific article on ChatGPT and cyber to share, and the title says it all: 5 Ways Hackers Will Use ChatGPT for Cyberattacks!

The article, from Dotan Nahum at Information Security Buzz, discusses (wait for it!) 5 ways hackers will use ChatGPT for cyberattacks. Here are the 5 ways he identified:

Malware Obfuscation: Threat actors use obfuscation techniques to mutate malware so that its signature evolves past traditional signature-based security controls. Each time researchers at CyberArk interacted with ChatGPT, it returned distinct code capable of generating multiple iterations of the same malware application. Hackers could therefore use ChatGPT to generate a virtually infinite number of malware variants that traditional signature-based security controls would struggle to detect.
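
To see concretely why signature-based controls struggle against this, note that most of them match a fingerprint of the file's bytes, so even a cosmetic rewrite defeats the match. Here's a minimal Python sketch (my illustration, not from the article; harmless toy snippets stand in for malware):

```python
import hashlib

# Two functionally identical routines; only the variable names differ,
# which is exactly the kind of cosmetic variation an LLM can produce
# endlessly. Harmless strings stand in for real payloads here.
variant_a = b"x = 1\ny = 2\nresult = x + y\n"
variant_b = b"first = 1\nsecond = 2\nresult = first + second\n"

# A signature-based control matches on a fingerprint of the bytes.
sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)  # False: same behavior, completely different signature
```

Every generated variant gets a brand-new fingerprint, which is why behavior-based detection matters more than ever here.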

Phishing and Social Engineering: Phishing attempts were often easy to spot in the past due to poor grammar and spelling errors. With ChatGPT, however, cybercriminals can create convincing, accurate phishing messages that are almost indistinguishable from legitimate ones, making it easier to trick unsuspecting individuals.
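
A minimal sketch of why the old heuristic no longer works, assuming a naive, hypothetical filter that keys on common misspellings:

```python
# A naive, hypothetical filter that flags messages containing common
# misspellings once typical of phishing. An LLM-polished message
# sails through because it contains none of them.
MISSPELLINGS = {"recieve", "acount", "verfy", "urgant", "passwrd"}

def looks_phishy(message: str) -> bool:
    words = {w.strip(".,!?:").lower() for w in message.split()}
    return bool(words & MISSPELLINGS)

old_phish = "Urgant: verfy your acount password now"
llm_phish = "Urgent: please verify your account password now"

print(looks_phishy(old_phish))  # True  - caught by the heuristic
print(looks_phishy(llm_phish))  # False - fluent text defeats the heuristic
```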

Ransomware and Financial Fraud: Because ChatGPT can generate human-like responses and understands natural language, hackers can use it to craft spear-phishing emails that are more convincing and tailored to their targets, increasing the chances of success. For example, it can facilitate fraudulent investment opportunities and CEO fraud: hackers can use it to generate fake investment pitches or emails impersonating CEOs or other high-level executives, tricking unsuspecting victims into sending money or sensitive information.
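
One common defensive check against CEO fraud, sketched here with hypothetical names and a placeholder domain, is to flag messages whose display name matches an executive but whose sending domain is external:

```python
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}  # hypothetical executive roster
COMPANY_DOMAIN = "example.com"           # placeholder for the real domain

def is_suspicious(from_header: str) -> bool:
    """Flag an executive's display name paired with an external domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    return display_name.lower() in EXECUTIVES and domain != COMPANY_DOMAIN

print(is_suspicious('"Jane Doe" <jane.doe@example.com>'))      # False: internal
print(is_suspicious('"Jane Doe" <ceo.urgent@freemail.test>'))  # True: classic spoof
```

It is a blunt check, but it catches the display-name spoofing pattern that fluent AI-written emails otherwise make so persuasive.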

Telegram OpenAI Bot: The Telegram OpenAI bot-as-a-service has been a subject of interest for developers and hackers alike. Recently, Check Point Research discovered that hackers had found a way to bypass restrictions and are using it to sell illicit services in underground crime forums. The hackers’ technique involves using the application programming interface (API) for OpenAI’s text-davinci-003 model instead of the ChatGPT variant of the GPT-3 models designed explicitly for chatbot applications. The API versions do not enforce restrictions on malicious content. As a result, hackers have found that they can use the current version of OpenAI’s API to create malicious content, such as phishing emails and malware code, without the barriers OpenAI has set.
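
For contrast, here is a rough sketch of the moderation pass a legitimate integrator can layer on top of the raw completion API, using OpenAI’s /v1/moderations endpoint; the `requests` library, the environment variable, and the error handling are my assumptions:

```python
import os
import requests

def is_flagged(text: str) -> bool:
    """Screen text with OpenAI's moderation endpoint before acting on it."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["flagged"]

# A bot-as-a-service operator who skips a check like this passes prompts
# straight to the model, which is the bypass Check Point describes.
```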

Spreading Misinformation: The recent discovery of a fake ChatGPT Chrome browser extension that hijacks Facebook accounts and creates rogue admin accounts is just one example of how cybercriminals exploit the popularity of OpenAI’s ChatGPT to distribute malware and spread misinformation.

There’s a lot more to the article than just this brief summary of 5 ways hackers will use ChatGPT for cyberattacks, so check it out via the link above!

So, what do you think? Is the net impact of ChatGPT positive or negative when it comes to cyber risk for organizations? Please share any comments you might have or if you’d like to know more about a particular topic.

Disclaimer: The views represented herein are exclusively the views of the author, and do not necessarily represent the views held by my employer, my partners or my clients. eDiscovery Today is made available solely for educational purposes to provide general information about general eDiscovery principles and not to provide specific legal advice applicable to any particular circumstance. eDiscovery Today should not be used as a substitute for competent legal advice from a lawyer you have retained and who has agreed to represent you.

5 comments

  1. This will require a thread because my response will be long, but here goes:

    It is an interesting article, but I think it simply parrots the analysis provided by CrowdStrike and Mandiant at the recent International Cybersecurity Forum in Lille, France, and RSA in San Francisco. And he misses the larger import. The ChatGPT AI threat doubles down. The cybersecurity implications are beyond comprehension – literally. I tried to explain this exponential inflection point in today’s “thoughts over morning coffee”.

    But as to cybersecurity, let me make a few points based on numerous chats I’ve had over the last month with some cybersecurity heavyweights, plus the 2-day workshop “LLMs and Cyberwar” held at NATO last week. On-site. Forced me to leave my island retreat. Damn these military “no more remote workshops” guys.

    Not only does ChatGPT fog the mirror when it comes to email, text and phone messages, but its immediate impact on “Shadow IT” is off the charts.

    In the past, we worried that we would fall for phony requests from the “CEO” to transfer large sums, execute a contract, or change bank accounts. We now know that these communications will appear as perfect as if the originator were asking in person. We need far better recognition and identification protocols than we employ today, or the average cost of credential theft to organizations will triple from its record 65% increase over the last 3 years. Dotan Nahum covers these points well in the article you linked.
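
    One hypothetical shape such a protocol could take, as a minimal sketch (the secret, request format, and names are placeholders of mine): high-risk requests carry an HMAC tag computed with a secret shared out-of-band, so a perfectly worded email alone is never sufficient authorization.

    ```python
    import hashlib
    import hmac

    # Placeholder secret, provisioned in person or via a channel an email
    # spoofer cannot reach; never distributed by email.
    SHARED_SECRET = b"provisioned-out-of-band"

    def sign_request(request: str) -> str:
        return hmac.new(SHARED_SECRET, request.encode(), hashlib.sha256).hexdigest()

    def verify_request(request: str, tag: str) -> bool:
        return hmac.compare_digest(sign_request(request), tag)

    req = "change-vendor-bank-account:ACME:ACCT-1234"
    tag = sign_request(req)
    print(verify_request(req, tag))             # True: authorized request
    print(verify_request(req, "0" * len(tag)))  # False: spoofer lacks the secret
    ```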

    But the true threat … not properly covered in the article … is that the cyber mavens truly worry about the revolutionary low-code/no-code applications that have been empowering business users to independently address their needs, without waiting for IT, by building their own applications and automations. Generative AI increases that power and reduces the barrier to entry to practically zero. I go on the Dark Web all the time to “shop,” and the no-code/LLM applications have proliferated. They have empowered not just “normal” business users but also given blackhats incredible power.

    [continues below]

  2. Embedding generative AI in low-code/no-code turbocharges the business’s capability to move forward independently. Major low-code/no-code vendors have already announced AI copilots that generate applications based on text inputs. Analysts are forecasting 5- to 10-times growth in low-code/no-code application development following the introduction of AI-assisted development. These platforms also allow the AI to easily integrate across the enterprise environment, gaining access to enterprise data and operations. Which is EXACTLY what blackhats are doing.

    Have we increased our security awareness training by 5 to 10 times since ChatGPT arrived? It has taken several decades for governments, scientists, and businesses to accept global warming. It is taking decades for executives and far too many in security to accept the simple fact that internet assets, including domains and DNS, are being weaponized to an exponential degree. Can we really hope that warnings about ChatGPT and similar threats will have any different, quicker impact and acknowledgement?

    Our reality has already changed to a state where every conversation I have with a ChatGPT module leaves behind an application. That application will undoubtedly plug into business data, be shared with other business users, and get integrated into business workflows.

    IN OTHER WORDS, WE HAVE LOST EVEN A SEMBLANCE OF CONTROL OVER OUR ATTACK SURFACES.

    [continues below]

  3. Business users are now making decisions about where data is stored, how it is processed by their applications, and who can gain access to it – without any regard to the cybersecurity function.

    Pollyannaish folks believe we can simply ban “citizen development” or ask business users to get approval for any application or data access. Sort of like asking for a moratorium on generative AI development. This of course won’t work, and this capability is only the beginning of the headache with which network engineers and architects will soon have to grapple.

    And while I appreciate the many comments I have seen that “red teams” will save the day, nobody in the cybersecurity industry believes that for a moment. I attended the “red team” events at the cybersecurity forum in Lille and at DIC in Zurich, and my team did the same at RSA in San Francisco. And I have seen the reports on the internal “red teaming” events at OpenAI. OpenAI said it has “improved” GPT-4 to “better refuse” malicious cybersecurity requests. But in the “red team” exercises I saw, users were able to trick ChatGPT into writing code for malicious software applications by entering a prompt that makes the artificial intelligence chatbot respond as if it were in developer mode. That completely jumps the safeguards. All of these “red team” results funnel back to OpenAI, so I assume it tries to fix these issues – playing the proverbial whack-a-mole game.

    [continues below]

  4. I have no answers here. Just saying this is all exponentially more complex than some of the media reports suggest.

    And for your U.S. readers, brace yourself. The 2024 election massacre has begun. The “Philadelphia Inquirer” (the 3rd oldest newspaper in the US) had its offices shut down last weekend due to a massive cyber-attack. The attack disabled its CMS (content management system), which put all print versions on hold. CrowdStrike thinks hackers used polymorphic malware (you can Google that), which Dotan Nahum actually mentions in his article. Did they use ChatGPT? 🤷‍♂️

    But the bad guy in this cyber-attack appears to be ransomware. The paper’s operations appear to be leaning on remote work, as the offices have been vacated and a temporary newsroom has been established in Center City. And the timing of the cyber-attack was particularly interesting, as it came 2 days ahead of a mayoral primary and a special election for a state House seat.

    The 2024 election massacre has begun.
