How Are Attackers Using ChatGPT To Write Malicious Code?

According to research, cybercriminals are using ChatGPT, an AI-powered chatbot that answers questions with human-like responses, to create harmful programs that can steal your data. If you are still wondering how attackers are using ChatGPT to write malicious code, this article will enlighten you.

Researchers from Check Point Research (CPR) have discovered the first examples of cybercriminals exploiting ChatGPT to create malicious code. In underground hacking communities, threat actors are using it to develop infostealers and encryption tools and to facilitate fraud. It sounds awful, but it’s true. Let’s learn how attackers are using ChatGPT to write malicious code.

So, how are attackers using ChatGPT to write malicious code? Experts at Check Point Research (CPR) have observed at least three instances of black hat hackers using ChatGPT’s AI capabilities for nefarious purposes.

Let’s go through the article and explore more. 

How Are Attackers Using ChatGPT To Write Malicious Code?

Although ChatGPT has only recently been released, security researchers have already begun to test its ability to produce malicious code. For instance, Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, reportedly used ChatGPT to develop both macOS malware and a phishing campaign.

Ozarslan said, “We started with a simple exercise to see if ChatGPT would create a believable phishing campaign and it did. I entered a prompt to write a World Cup-themed email to be used for a phishing simulation and it created one within seconds, in perfect English.”

Ozarslan “persuaded” the AI to produce a malicious email by posing as a security specialist from an attack-simulation company who wanted to build a phishing-simulation tool. Even though ChatGPT acknowledged that “phishing attacks can be used for malicious purposes and can cause harm to individuals and organizations,” it created the email anyway.

Next, Ozarslan asked ChatGPT to create Swift code that could locate Microsoft Office files on a MacBook, encrypt them, and deliver them over HTTPS to a web server. ChatGPT produced sample code without any prompts or warnings. As Ozarslan’s exercise demonstrates, fraudsters can easily get around OpenAI’s safeguards by posing as researchers or hiding their malicious intent.

Here is another example of how attackers are using ChatGPT to write malicious code.

On December 21, a threat actor going by the handle USDoD posted a Python script, created with the chatbot, for encrypting and decrypting data using the Blowfish and Twofish encryption algorithms. Although the code could serve entirely legitimate purposes, CPR researchers found that a threat actor could easily modify it to run on a device without any human interaction, turning it into ransomware in the process. Unlike the author of the infostealer, USDoD appeared to possess very little technical know-how and even insisted that the Python script he produced with ChatGPT was the first program he had ever written, according to CPR.
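
CPR did not publish the threat actor’s script, but a minimal sketch of what such a dual-use file encryption utility might look like in Python is shown below. It assumes the third-party pycryptodome library and covers only Blowfish (pycryptodome does not implement Twofish). Used as-is, it is an ordinary encryption tool; CPR’s point is that the dangerous step is not the cryptography itself but wrapping it in logic that runs across a victim’s files without any human interaction.

```python
# A minimal sketch of a dual-use Blowfish file encryption utility.
# Assumes the third-party pycryptodome package (pip install pycryptodome);
# Twofish is omitted because pycryptodome does not implement it.
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

BLOCK_SIZE = Blowfish.block_size  # 8 bytes for Blowfish

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt one file in CBC mode, writing IV + ciphertext to path + '.enc'."""
    iv = get_random_bytes(BLOCK_SIZE)
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path + ".enc", "wb") as f:
        f.write(iv + cipher.encrypt(pad(plaintext, BLOCK_SIZE)))

def decrypt_file(enc_path: str, key: bytes) -> bytes:
    """Reverse encrypt_file: read IV + ciphertext and return the plaintext."""
    with open(enc_path, "rb") as f:
        data = f.read()
    iv, ciphertext = data[:BLOCK_SIZE], data[BLOCK_SIZE:]
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    return unpad(cipher.decrypt(ciphertext), BLOCK_SIZE)

# Legitimate usage: protect a single document with a user-supplied key
# (Blowfish accepts keys of 4 to 56 bytes).
# encrypt_file("report.docx", b"correct horse battery staple")
```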

How To Stop Attackers From Using ChatGPT To Write Malicious Code?

In an effort to prevent the abuse of their technologies, OpenAI and the creators of similar tools have implemented filters and restrictions and are continually enhancing them. Additionally, the AI tools are still glitchy and occasionally make what several researchers have called flat-out errors, which could impede certain malicious schemes, at least for the time being. However, many have projected that these technologies carry significant long-term potential for abuse. Developers will have to train and enhance their AI engines to recognize queries that can be used maliciously in order to make it more difficult for criminals to abuse the technology, adds Check Point’s Sergey Shykevich.
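
To illustrate what “recognizing queries that can be used maliciously” might involve at its very simplest, here is a toy Python sketch. It is not OpenAI’s actual moderation system: the pattern list, threshold, and function names are invented for this example, and real filters rely on trained classifiers rather than keyword matching.

```python
# A toy, purely illustrative prompt-screening heuristic; not OpenAI's actual
# moderation system. The pattern list and threshold below are invented for
# this sketch, and a production filter would use a trained classifier.
import re

SUSPICIOUS_PATTERNS = [
    r"\bkeylogger\b",
    r"\bransomware\b",
    r"\bexfiltrat\w*\b",
    r"\bbypass (?:antivirus|edr|detection)\b",
    r"\bphishing (?:email|page|campaign)\b",
]

def score_prompt(prompt: str) -> int:
    """Count how many malicious-intent markers appear in the prompt."""
    text = prompt.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, text))

def needs_review(prompt: str, threshold: int = 1) -> bool:
    """Flag the prompt for a stricter policy check if it scores too high."""
    return score_prompt(prompt) >= threshold

print(needs_review("Write a World Cup-themed phishing email"))  # True
print(needs_review("Explain how HTTPS certificates work"))      # False
```

As Ozarslan’s experiment shows, simple intent checks like this are easy to sidestep with a plausible cover story, which is why developers must keep retraining their models on the ways attackers disguise malicious requests.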

Wrapping Up

We hope this short article has shed light on how attackers are using ChatGPT to write malicious code. Scientists build technologies for the good of mankind, yet some people use the same technology to cause harm. It’s our responsibility to use AI responsibly. Share your thoughts in the comment box below, and follow Deasilex for more updates on ChatGPT and OpenAI.

Frequently Asked Questions

Q1. How Malicious Code Is Created?

Malicious code, such as a backdoor, is typically created deliberately by attackers. However, a programmer who needs quick access to an application for debugging can also construct it, and it can even be produced unintentionally through coding mistakes.

Q2. What Is Malicious Scripting?

A malicious script is a piece of code that attackers create or alter for harmful purposes. Cyber threat actors conceal such scripts on reputable sites, in third-party scripts, and in other locations to compromise the security of client-side web apps and webpages.

Q3. What Can A Malicious Code Do?

The term “malicious code” refers to destructive computer code or web scripts designed to introduce system weaknesses that can lead to backdoors, security breaches, information and data theft, and other damage to files and computer networks. It’s a hazard that antivirus software may not be able to stop on its own.

Q4. What Is The Most Common Way Malicious Code Is Spread?

Phishing emails are by far the most common way for hackers and state-sponsored hacking groups to spread malware. Attackers have become very good at crafting emails that trick recipients into clicking links or downloading files containing malicious software.

Q5. What Are The Four Primary Types Of Malicious Code Attacks?

The four main categories of malicious code attacks are unplanned, deliberate, direct, and indirect. To defend against them, the “defense in depth” strategy layers security measures to improve overall protection and give responders more time to act during an incident.
