Summary
ChatGPT is an artificial intelligence chatbot created by OpenAI that reached 1 million users within days of its launch in late 2022. It generates fluent responses to virtually any prompt. It is a variant of the GPT (Generative Pre-trained Transformer) model and, according to OpenAI, was trained using Reinforcement Learning from Human Feedback (RLHF) with the same methods as InstructGPT. Due to its flexibility and ability to mimic human conversation, ChatGPT has raised concerns in several areas, including cybersecurity.
Its popularity has grown especially because it provides fluent answers to questions about virtually any subject, including history, math, literature, and technology.
The chatbot is also a great tool for developers: it can create, review, improve, and explain code in multiple languages, such as C++, Java, Python, Go, and Rust. It also provides an API that can be used by plugins on platforms such as Slack and Microsoft Teams, and in IDEs like VS Code.
And what happens if someone asks ChatGPT to create malware or ransomware? There is no need to panic just yet: ChatGPT will not create novel, functional malware automatically. But it can still be abused by attackers to sharpen their social engineering lures or to understand and build common malware components.
For social engineering, it can generate novel, fluent bait messages. For malware, it can serve as an interactive interface to the many malware tutorials and examples that already exist on the internet, but it still takes a certain level of knowledge to turn its output into functional malware.
There is also good news for cybersecurity defenders. ChatGPT has also been trained on examples of common code vulnerabilities and can pick them out when you share source code with it. So, like many inventions, ChatGPT can be used for both good and evil. Here’s a closer look at how:
Malware Development
Can ChatGPT automatically create novel, functional malware? The short answer is no, but the tool can still aid attackers in the malware development process, especially those without experience.
The first use case is learning about malicious techniques and obtaining code examples for them. For example, we asked ChatGPT to explain the details of a common process injection technique named Process Hollowing.
The answer was quite precise and included an example in C++, which could easily be modified to embed this functionality into malware.
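To give a sense of the structure involved without reproducing ChatGPT’s output, here is our own deliberately skeletal C++ sketch of the canonical process hollowing call sequence. The actual payload-mapping steps are described in comments only, so the program does nothing more than start and resume a suspended notepad.exe (an illustrative choice of host process):

```cpp
// Skeletal outline of the process hollowing call sequence on Windows.
// The hollowing steps themselves are intentionally left as comments.
#include <windows.h>

int main() {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    // 1. Start a legitimate host process in a suspended state.
    if (!CreateProcessA("C:\\Windows\\System32\\notepad.exe", nullptr,
                        nullptr, nullptr, FALSE, CREATE_SUSPENDED,
                        nullptr, nullptr, &si, &pi)) {
        return 1;
    }

    // 2. A real implementation would now "hollow" the host:
    //    - NtUnmapViewOfSection to unmap the original image,
    //    - VirtualAllocEx + WriteProcessMemory to map a payload image,
    //    - GetThreadContext / SetThreadContext to redirect the entry point.

    // 3. Resume the main thread (here, notepad simply runs normally).
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```

This suspend-modify-resume pattern is also what often makes the technique recognizable to endpoint defenses.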
We asked more questions about specific techniques that could be incorporated into malware, such as how to download and execute files from remote URLs or how to encrypt a file with AES on Windows, and the chatbot was able to generate and explain code for all of them.
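As a rough illustration of the encryption case (again our own sketch, not ChatGPT’s output), the snippet below encrypts a buffer with AES-CBC via the Windows CNG API. Error handling is omitted, and the all-zero key and IV are placeholders, never something real code should use:

```cpp
// Minimal AES-CBC encryption sketch using the Windows CNG (bcrypt) API.
#include <windows.h>
#include <bcrypt.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "bcrypt.lib")

int main() {
    BCRYPT_ALG_HANDLE alg = nullptr;
    BCRYPT_KEY_HANDLE key = nullptr;
    UCHAR keyBytes[16] = {0};            // placeholder AES-128 key: illustration only
    UCHAR iv[16] = {0};                  // placeholder IV: illustration only
    UCHAR plaintext[] = "example data";  // data to encrypt

    BCryptOpenAlgorithmProvider(&alg, BCRYPT_AES_ALGORITHM, nullptr, 0);
    BCryptSetProperty(alg, BCRYPT_CHAINING_MODE, (PUCHAR)BCRYPT_CHAIN_MODE_CBC,
                      sizeof(BCRYPT_CHAIN_MODE_CBC), 0);
    BCryptGenerateSymmetricKey(alg, &key, nullptr, 0, keyBytes,
                               sizeof(keyBytes), 0);

    // First call queries the required output size (includes PKCS#7 padding).
    ULONG cipherLen = 0;
    BCryptEncrypt(key, plaintext, sizeof(plaintext), nullptr, iv, sizeof(iv),
                  nullptr, 0, &cipherLen, BCRYPT_BLOCK_PADDING);

    std::vector<UCHAR> ciphertext(cipherLen);
    BCryptEncrypt(key, plaintext, sizeof(plaintext), nullptr, iv, sizeof(iv),
                  ciphertext.data(), cipherLen, &cipherLen, BCRYPT_BLOCK_PADDING);

    printf("Encrypted %lu bytes\n", cipherLen);

    BCryptDestroyKey(key);
    BCryptCloseAlgorithmProvider(alg, 0);
    return 0;
}
```

Code like this is entirely legitimate on its own; it only becomes a malware building block when combined with the file-searching and key-handling logic discussed below.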
However, during our tests we found that ChatGPT generated incomplete code for questions beyond a certain level of complexity. For example, the AI wasn’t able to provide us with the entire code for a program that searches for files on Windows and encrypts them using a combination of symmetric and asymmetric encryption, a behavior typical of ransomware.
The AI is also able to identify malicious intent, depending on how the question is asked. For example, we failed to obtain, in a single answer, code that downloads a base64-encoded executable and executes it using process hollowing.
But although the code wasn’t generated in a single answer, nothing stops an attacker from splitting the question:
- Asking for a code example of how to download base64-encoded files;
- Asking for an example of the Process Hollowing technique;
- Combining both answers to perform process injection with the downloaded file.
Therefore, ChatGPT won’t create novel or complete malware for someone, but it can indeed be used as a resource to study techniques and produce snippets of code that, when combined, could form functional malware.
Social Engineering
Attackers can use ChatGPT to improve their social engineering. It can help write the specific text used in phishing emails that redirect victims to malicious websites or lure them into downloading attached malware.
For example, we asked ChatGPT to write an email that makes it look as though the recipient has just won $100,000 in a lottery named “BigMoney.” We specifically asked ChatGPT to randomly generate a ticket number and to convince the recipient to open a link to supposedly claim the prize.
The text generated by the AI could easily be employed in phishing emails that redirect people to a website designed to steal sensitive information, such as email accounts and passwords.
Another example is spear phishing. With prior knowledge about the target, an attacker can feed details about the victim to ChatGPT and ask for help generating a text that convinces the person to open an attached file.
To illustrate this, we asked ChatGPT to generate an email from a person named Ruth to another person named Glenn, asking for help with a fictional project. The project details would supposedly be attached to the email, and that attachment could be a weaponized file created by the attacker.
Attackers could also abuse ChatGPT to create fake news that lures victims to a website designed to steal sensitive data, such as email or bank credentials. As an example, we asked the AI to generate an email containing fake news about the California government providing $5,000 to COVID-19 victims.