ChatGPT - Potentially Enabling Bad Actors

ChatGPT and Other LLMs

Many articles have appeared about how ChatGPT works. We won't rehash that information here. See ref 1 below for an excellent technical discussion on how ChatGPT and similar LLM systems work.

The number of LLMs will increase significantly during 2023. The most significant alternative to ChatGPT will likely come from Google, which has been training LLM systems internally for the past few years. Google publicly stated that it planned to move slowly and build safeguards into the technology before any public release. The buzz around ChatGPT's availability appears to have forced Google to accelerate those plans, and it is holding an event on the 8th of February 2023 related to AI tools and search.

Other commercial and open-source LLM systems are available, and more are likely to follow. Microsoft is a significant investor in OpenAI's commercial arm and plans to integrate OpenAI's LLMs and other technologies into the Bing search engine and Office products. The spread of generative AI into mass-market tools will likely be a defining feature of 2023 and beyond.

The Cyberthreat from ChatGPT and Other LLMs

Google was right to be cautious about releasing LLM-based systems for general public use. In the few months that ChatGPT has been available, both cybersecurity researchers and bad actors on the dark web have demonstrated how it can enhance cyberattacks. Technology demonstrations show how simple it is for anyone to use ChatGPT to write convincing emails and other text for phishing attacks. The system's generative capabilities also extend to producing runnable computer code from simple English input.

OpenAI has rudimentary guardrails in place to prevent bad actors from using ChatGPT to generate output usable for malicious cyber activity (and to block harmful output such as racist or homophobic content), but it is easy to circumvent these limits and get ChatGPT to deliver whatever output is wanted.

Hopefully, OpenAI will strengthen these protections as the beta period progresses, and any Google LLM will ship with robust safeguards against misuse from day one.

Here are three areas where bad actors currently use ChatGPT to enhance their criminal cyber activity.

Generating More Convincing Phishing Emails

The main threat from ChatGPT today is its use in generating more convincing text for phishing emails and other social-engineering attacks. This help comes in multiple ways. The simplest is allowing non-native speakers of a language to generate text that reads better than anything they could write themselves or produce with a tool like Google Translate. This is not limited to creating English phishing text for non-English speakers: ChatGPT accepts input in, and produces convincing output in, all major languages. Bad actors can therefore write an email in their native language and have it convincingly rendered into languages they don't speak.

Criminals can also use ChatGPT to take their phishing texts to the next level. Because ChatGPT uses a chatbot interface, bad actors can repeatedly ask the LLM to revise its output, producing text targeted at specific people or organizations in language more likely to trick them. The CSO article in ref 2 shows an example of this back-and-forth use of ChatGPT to craft a phishing email. Such iterative refinement is ideally suited to spear-phishing attacks against associates of high-value targets such as business executives and politicians.

Beyond phishing emails, ChatGPT can also produce text that is almost indistinguishable from human writing for social media posts, fake press releases, web pages, YouTube video descriptions, and other collateral that cybercriminals can use to build a fake online presence and trick people into falling for phishing and other attack vectors.

At a technical level, ChatGPT's more natural attack emails may slip past current spam and malware content checkers, which often key on the stilted phrasing and recycled templates of traditional phishing campaigns.
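
To see why, consider a minimal sketch, in Python, of the kind of keyword heuristic that simple content filters still rely on. The phrase list and scoring below are illustrative assumptions, not any real filter's rules.

```python
# Minimal sketch of a naive keyword-based phishing heuristic.
# The phrase list and scoring are illustrative assumptions,
# not any real filter's rules.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
    "your password has expired",
]

def naive_phish_score(body: str) -> float:
    """Fraction of known phishing phrases found in the email body."""
    text = body.lower()
    return sum(p in text for p in SUSPICIOUS_PHRASES) / len(SUSPICIOUS_PHRASES)

# A boilerplate phishing email trips the heuristic...
template = "URGENT ACTION REQUIRED: click here immediately to verify your account."
print(naive_phish_score(template))  # 0.75

# ...while fluent, LLM-style prose making the same request sails through.
fluent = ("Hi Dana, while reconciling the Q1 invoices I noticed your vendor "
          "profile is out of date. Could you confirm your details today?")
print(naive_phish_score(fluent))    # 0.0
```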

Writing Malware Code

Not only can ChatGPT output persuasive text in multiple human languages, but it can also output computer code in several programming languages, including minified JavaScript functions that can be embedded in webpages for browsers to run. Bleeping Computer (ref 3) reported getting ChatGPT to write JavaScript that detected credit card numbers, along with their expiration dates, CVV digits, billing addresses, and other payment information, as they were entered into a web page. SC Magazine likewise reports that ChatGPT was used to generate code that scans text on a website for Social Security numbers (ref 4).
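
Code like this is trivial for an LLM to produce because the underlying technique is so simple. As an illustration, here is a minimal defensive (DLP-style) sketch in Python that flags likely card numbers in text using a regular expression and the standard Luhn checksum; the pattern and function names are our own illustrative choices, not code from either report.

```python
import re

# Candidate runs of 13-19 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag substrings that look like valid card numbers (a DLP-style scan)."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(match.group())
    return hits

# The 4111... value is a well-known test card number, not real payment data.
print(find_card_numbers("Order ref 12345, card 4111 1111 1111 1111."))
```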

Security researchers who monitor the dark web and other cybercriminal forums have observed a number of these code-generation uses of ChatGPT. Worryingly, many of the discussions involve bad actors who don't have the skills to write malware-related code themselves, so the LLM directly increases the threat such criminals pose.

It's true that much of the malware code ChatGPT generates is not very sophisticated and that additional skills and steps are needed to use it effectively. Unfortunately, ChatGPT can also help there.

Building an Attack Chain

In a pair of blog posts in December and January, researchers at Check Point outlined how low-skilled bad actors can use ChatGPT to build an attack chain (refs 5 & 6). It wasn't a very sophisticated attack chain, but it was a proof of concept, and it showed how interacting with the LLM could improve the generated code and make it more dangerous. ChatGPT thus enables bad actors who lack the skills to build an attack chain to piece one together and then refine it over time through iteration. The skills barrier to mounting cyberattacks is eroding. To quote the researchers (ref 6):

“And just like that, the infection flow is complete,” the researchers wrote. “We created a phishing email, with an attached Excel document that contains malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs, and all that’s left for us to do is to execute the attack.”

Others have mapped how ChatGPT can help cybercriminals onto the tactics of the MITRE ATT&CK framework. They say it can help with tasks under collection, exfiltration, initial access, lateral movement, and reconnaissance, and they predict that as LLMs evolve, hackers will also be able to use them for persistence and privilege-escalation tasks; the sketch below records this mapping.
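
For defenders who want to track these claims in their own threat models, here is a minimal Python sketch. The tactic IDs are MITRE's published identifiers; the dictionary structure and "observed"/"predicted" labels are our own illustrative assumptions.

```python
# MITRE ATT&CK tactics reportedly aided by LLMs today ("observed")
# or expected to be aided as the models evolve ("predicted").
# The status labels are illustrative assumptions, not ATT&CK fields.
LLM_ASSISTED_TACTICS = {
    "TA0043": ("Reconnaissance", "observed"),
    "TA0001": ("Initial Access", "observed"),
    "TA0008": ("Lateral Movement", "observed"),
    "TA0009": ("Collection", "observed"),
    "TA0010": ("Exfiltration", "observed"),
    "TA0003": ("Persistence", "predicted"),
    "TA0004": ("Privilege Escalation", "predicted"),
}

for tactic_id, (name, status) in LLM_ASSISTED_TACTICS.items():
    print(f"{tactic_id}  {name:<22}  LLM assistance: {status}")
```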

Robust Cyber Defense Is Still the Best Protection

ChatGPT is a technology that can be turned to both good and bad purposes. The history of the cybersecurity space tells us that bad actors will exploit any technology they can. LLMs are no exception.

The good news is that the attacks ChatGPT assists with are known, the same threats we've been defending against for years. LLMs will make it easier for more cybercriminals to raise their game, especially for the text in phishing emails and other social-engineering attacks. Over time, criminals will likely learn how to use ChatGPT to generate more sophisticated malware code and attack chains.

Thwarting bad actors armed with these new LLM tools will require doubling down on the protections and user training we already rely on. Making people wary of any email that asks them for information is critical; it's better for people to be over-cautious than to fall for a ChatGPT-generated email. Technical controls still matter too: standard email authentication checks, for example, don't care how fluent a message is (see the sketch below).
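
As an illustration, here is a minimal Python sketch that reads the SPF, DKIM, and DMARC verdicts a mail gateway stamps into the Authentication-Results header. The message and header values are fabricated for the example; real deployments enforce these checks at the gateway rather than in scripts.

```python
import email
import re

# A fabricated message; the Authentication-Results values are illustrative.
RAW = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=partner.example;
 dkim=fail header.d=partner.example;
 dmarc=fail header.from=partner.example
From: "Accounts" <billing@partner.example>
Subject: Updated payment details

Please confirm your details today.
"""

def auth_verdicts(raw_message: str) -> dict[str, str]:
    """Extract SPF/DKIM/DMARC results from the Authentication-Results header."""
    msg = email.message_from_string(raw_message)
    header = msg.get("Authentication-Results", "")
    return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header))

verdicts = auth_verdicts(RAW)
print(verdicts)  # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
if verdicts.get("dmarc") != "pass":
    print("Treat with suspicion: sender authentication failed.")
```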

As bad actors use LLMs to help them, defenders will also need to use AI systems to counter attacks. We are entering a new phase of the ongoing technological race between cyber defenders and attackers. Both sides will use AI tools.

From a defensive standpoint, human cybersecurity expertise will continue to be crucial: the human brain is still our best neural network! Defending organizations in this brave new world will require experts focused on emerging threats and attack vectors, and on how to defend against them. Cybersecurity cannot be a part-time activity that IT teams squeeze in around their other business-supporting functions.

Few organizations have the resources to build and retain the dedicated cybersecurity team needed to defend properly against the modern threat landscape 24/7. Critical Insight can provide the cybersecurity advice, services, and training your organization needs. See our Cybersecurity-as-a-Service (CSaaS) page for details, and use the form below to contact us and start a conversation about how we can partner to protect your staff, data, and IT infrastructure from both conventional and ChatGPT-assisted attacks.

References

1. Towards Data Science: How ChatGPT Works - The Models Behind The Bot - https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286

2. CSO: How AI chatbot ChatGPT changes the phishing game - https://www.csoonline.com/article/3685488/how-ai-chatbot-chatgpt-changes-the-phishing-game.html

3. Bleeping Computer: OpenAI's new ChatGPT bot - 10 dangerous things it's capable of - https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/

4. SC Magazine: Security risks of ChatGPT and other AI text generators - https://www.scmagazine.com/resource/emerging-technology/security-risks-of-chatgpt-and-other-ai-text-generators

5. Check Point: OPWNAI - Cybercriminals starting to use ChatGPT - https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/

6. Check Point: OPWNAI - AI that can save the day or hack it away - https://research.checkpoint.com/2022/opwnai-ai-that-can-save-the-day-or-hack-it-away/