
Cybercriminals Using GhostGPT

ChatGPT's "evil twin," GhostGPT, has gained popularity among cybercriminals on underground forums.


ChatGPT and other generative AI tools have become widely used for everyday tasks, with their popularity surging in 2023. To mitigate cyber risks and promote safe usage, ChatGPT is designed with safety barriers and guidelines that prevent it from facilitating illegal activities, including cybercrime.


However, in a game of cat and mouse, cybercriminals began developing their own underground AI tools in 2023. One notable example is WormGPT, an AI chatbot marketed primarily for malicious purposes such as Business Email Compromise (BEC). BEC is a form of social engineering used by attackers to deceive users into transferring money or disclosing sensitive company information.


This trend eventually led to the development of other tools, such as WolfGPT and EscapeGPT, which enabled cybercriminals to conduct sophisticated phishing attacks. These tools could craft highly convincing emails that bypass user security training by mimicking the style and tone of legitimate communications historically sent within companies.


GhostGPT and Its Capabilities

What makes GhostGPT particularly appealing to cybercriminals is its ability to empower low-skilled threat actors to craft and execute successful campaigns with ease.

According to security researchers at Abnormal Security, this AI tool is marketed as a "jailbroken" version of ChatGPT, meaning all of the traditional guardrails and security measures have been removed. As a result, users can develop their own malicious campaigns with a single prompt, without encountering ChatGPT's safeguards or content restrictions.


GhostGPT is also promoted as being uncensored, fast, free of audit logging, and highly accessible, further enhancing its appeal in the cybercrime community.


Abnormal Security Research

Researchers at Abnormal Security have found that GhostGPT is used not only to craft email campaigns but also to produce other malicious materials, such as different malware families.


With Abnormal Security confirming thousands of views within the first few days of its posting, GhostGPT is an emerging AI threat that stands out from earlier cybercrime-focused AI applications. The tool can be purchased easily through a Telegram bot and used with little to no training, education, or computing skills, essentially allowing novice cybercriminals to craft any type of malware or phishing campaign with a single question or command.


User Education

Emerging AI threats and tools are not going away, and they should be treated as a known threat to both businesses and personal day-to-day internet use. Below are some education tips you can share to raise awareness and safeguard against AI threats:

  1. Be cautious when using open GenAI tools. Remember that nothing is truly "free"; these tools often raise concerns about how your data is shared and handled, sometimes exposing it to parties who should not have access to it.

  2. When receiving emails, check the sender's address to confirm it belongs to someone you expect to hear from.

  3. Carefully read emails. Determine whether they sound too formal or lack personalization. AI-generated emails often sound overly automated, use unusually complex sentences and vocabulary, or contain flawless grammar.

  4. Provide cybersecurity news feeds for your business so employees can learn what is happening in the community. Proper education and awareness of current events helps others identify potential cyber threats and risks to the business.
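As a minimal sketch of tip 2, the sender check can be partly automated by comparing the domain in an email's From header against a list of domains you expect mail from. The domains below are hypothetical examples, and note that From headers can themselves be spoofed, so this is only a first-pass screen, not proof of authenticity:

```python
from email.utils import parseaddr

# Hypothetical allowlist of domains this organization expects mail from
EXPECTED_DOMAINS = {"example.com", "partner.example.org"}

def sender_looks_expected(from_header: str) -> bool:
    """Return True if the From header's domain is on the allowlist.

    This only checks the claimed sender; it does not verify
    authenticity (SPF/DKIM/DMARC handle that at the mail server).
    """
    _, address = parseaddr(from_header)
    if "@" not in address:
        return False
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in EXPECTED_DOMAINS

print(sender_looks_expected("CEO <ceo@example.com>"))   # expected domain -> True
print(sender_looks_expected("CEO <ceo@examp1e.com>"))   # lookalike domain -> False
```

A check like this catches the common lookalike-domain trick (e.g., a "1" substituted for an "l"), which AI-generated phishing emails frequently pair with otherwise convincing text.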


