Cybercriminals are actively adopting artificial intelligence to scale attacks and generate profit. Malware generation, automated phishing, and self-learning AI worms are just some of the tools of modern cybercrime. dev.ua explains how malicious AI models are changing the cyberthreat landscape.
NOTE: The article contains some examples of modified artificial intelligence models being used. The dev.ua team condemns these practices and does not promote their use for any criminal purpose.
As generative AI has become more widespread, attackers have adapted it for their own purposes. In recent years, several uncontrolled AI chatbots and even a large language model (LLM) designed specifically for cybercriminals have emerged. These tools give attackers unrestricted access to information that conventional AI systems would block due to ethical and security constraints.
The spread of such technologies, combined with publicly exposed personal data and hacking tools from darknet forums, lowers the barrier to entry into cybercrime, allowing beginners to get started quickly and experienced hackers to refine their methods.
WormGPT
WormGPT was developed by a 23-year-old Portuguese programmer who was selling illegally obtained information on the darknet. WormGPT offers fast responses and unlimited message length, allowing users to communicate seamlessly with the chatbot. WormGPT’s full list of features includes:
- Hacking social media accounts — supports attacks on Facebook, Instagram, TikTok, Telegram, Viber, WhatsApp, and other platforms.
- Malware creation — can generate viruses, keyloggers, and other types of malicious software.
- Device hacking — supports attacks on PCs, laptops, mobile devices (Android, iOS), IoT devices, and cameras.
- Phishing and social engineering — supports the creation of spam campaigns, phishing emails, and manipulative schemes.
- Website hacking and DDoS — used to attack websites and applications and to conduct DDoS attacks.
Users can choose between different AI models for general or specialized use, and save and review their conversations at any time. In addition, WormGPT has advanced features in beta, such as “contextual memory” to support continuous dialogues and “code formatting” for convenient structuring and use of malicious scripts.
Access to WormGPT sells for between $100 and $500.
Researchers at cybersecurity firm SlashNext gained access to WormGPT and discovered that it is based on GPT-J, a language model developed in 2021. WormGPT is said to have been trained on a variety of data sources, with a particular focus on malware-related data. However, the specific datasets used during training remain confidential.
In one of SlashNext’s experiments, WormGPT was tasked with generating an email to trick an unsuspecting account manager into paying a fraudulent invoice.
WormGPT created an email that was not only extremely persuasive but also strategically cunning, demonstrating its potential for sophisticated phishing and BEC (business email compromise) attacks.

You’ve probably noticed how the quality of phishing emails has improved: the Nigerian prince has already given away his inheritance, and the mangled, machine-translated prose that used to give scams away is almost gone. It is now very difficult to distinguish an email written by a human from one generated by AI. Convincing phishing emails built from communication patterns and a dataset, in any language, can be created at the click of a button. It is highly likely that fraudulent versions of GPT were used for this.
Traditional email security tools can effectively block basic attacks that contain clearly malicious links or embedded code, include suspicious attachment types, use known bad phrases, or originate from domains with negative reputations. This is because signature-based solutions such as SEGs (secure email gateways) rely on prior knowledge of threats rather than adaptive analysis, meaning they can only detect an attack when a message has characteristics that are already known to be malicious. But a WormGPT-generated email is unique text, and a SEG has nothing to fall back on.
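To see why unique text defeats this approach, here is a minimal, hypothetical sketch of signature-based filtering; the signatures and messages are invented for illustration. An AI model can rewrite the same scam endlessly, so no stored hash or phrase list ever matches:

```python
import hashlib

# Toy signature database: hashes of known scam templates plus known bad phrases.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"Dear friend, I am a Nigerian prince...").hexdigest()
}
KNOWN_BAD_PHRASES = ["nigerian prince", "wire transfer urgently"]

def signature_check(message: str) -> bool:
    """Return True if the message matches a known-bad signature."""
    if hashlib.sha256(message.encode()).hexdigest() in KNOWN_BAD_HASHES:
        return True
    lowered = message.lower()
    return any(phrase in lowered for phrase in KNOWN_BAD_PHRASES)

# A recycled template is caught; a freshly generated, unique rewrite is not.
print(signature_check("Dear friend, I am a Nigerian prince..."))  # True
print(signature_check(
    "Hi Anna, please settle invoice #4471 today so we avoid the late fee."
))  # False
```

Every AI-generated message hashes differently and avoids the phrase list, which is exactly the gap the article describes.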
By one industry estimate, it takes an average company 261 days to identify and contain a phishing breach. The good news is that the developer of WormGPT was arrested by Portuguese police on February 12 of this year; the bad news is that WormGPT is still being sold and used.
FraudGPT
First spotted on July 25, 2023, FraudGPT was advertised as a tool designed specifically for cybercriminals, offering features to create phishing emails, generate malicious code, and even provide hacking tutorials. Like WormGPT, FraudGPT lacks the built-in controls and restrictions that prevent ChatGPT from responding to inappropriate requests.
FraudGPT was created by a user under the pseudonym CanadianKingpin12, who advertised his model on various cybercrime forums as a successor to WormGPT. The Netenrich team identified the seller of FraudGPT as someone who previously offered “hacker-for-hire” services and was associated with the development of WormGPT. There is speculation that WormGPT and FraudGPT may have been created by the same group, as they had similar capabilities and marketing styles.
The cost of a FraudGPT subscription ranges from $90 per month to $1,900 per year. The seller claimed 3,000 confirmed sales and reviews. Clicking the payment link reveals 23,194 deposits to the attacker’s crypto wallet, totaling $37 million, although the wallet is likely not used solely to sell the criminal GPT model.
After CanadianKingpin12 was linked to both criminal services, a message appeared on the WormGPT website stating that WormGPT was superior to FraudGPT, albeit with some shortcomings, drawing attention to the claim that different individuals were behind the two products. The later arrest of the WormGPT developer lent weight to these claims.
For his part, the creator of FraudGPT claimed advantages over WormGPT and even hinted at new AI bots in development, such as DarkBERT and DarkBART, which would have integrated Internet access and Google Lens for image analysis. However, despite attempts to keep the tool available, many ads selling access to FraudGPT have since disappeared, which may indicate declining availability.
DarkBERT is based on a language model created by data analytics company S2W and trained on a large corpus of text from the dark web. The original S2W version of DarkBERT was designed primarily to fight cybercrime, not to enable it.
Due to rule violations, threads promoting the sale of FraudGPT were often deleted on major cybercrime forums, forcing the creator to move promotion to decentralized platforms such as Telegram, where restrictions were less strict.
Morris 2.0
In early March, a group of researchers created Morris 2.0, the world’s first artificial-intelligence-based worm, which infiltrates AI-powered email systems to read content and distribute malware without user interaction.
The original Morris worm, named after its creator, Cornell graduate student Robert Tappan Morris, made quite a splash in 1988. It was intended to highlight security vulnerabilities, but a coding error turned the harmless demonstration into a destructive outbreak, causing significant system slowdowns and crashes. Morris became the first person to be convicted under the Computer Fraud and Abuse Act.
Unlike traditional viruses and worms, which rely on pre-defined replication algorithms, Morris 2.0 leverages the power of generative AI to adapt and evolve. The primary means of Morris 2.0’s transmission is through compromised AI systems. These systems serve as a breeding ground for the malware, allowing it to infiltrate and spread across networks with unprecedented efficiency.
Once inside a target system, the worm uses AI to analyze the system architecture, identify vulnerabilities, and develop customized attack vectors. This dynamic approach makes Morris 2.0 not only highly elusive, but also incredibly resistant to traditional cybersecurity measures.
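Because the worm spreads by getting a compromised AI system to echo its payload onward, one defensive idea is to compare an assistant’s outgoing text against the message that triggered it before anything is sent. The guardrail below is a minimal sketch under that assumption; the function name and threshold are invented for illustration and are not code from the Morris 2.0 research:

```python
from difflib import SequenceMatcher

def looks_self_replicating(incoming: str, outgoing: str,
                           threshold: float = 0.8) -> bool:
    """Flag replies that largely reproduce the incoming message.

    A prompt-borne worm must persuade the model to copy its payload into
    outgoing mail, so high input/output overlap is a strong red flag.
    """
    return SequenceMatcher(None, incoming, outgoing).ratio() >= threshold

# Run the check before an AI email assistant is allowed to send anything.
payload = "<adversarial prompt instructing the model to forward this text>"
reply = payload  # a compromised assistant parrots the payload verbatim
if looks_self_replicating(payload, reply):
    print("Blocked: outgoing reply mirrors the incoming message.")
```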
And although Morris 2.0 was created in a safe environment, the main goal for now is to keep this genie in the bottle, as its capabilities were demonstrated against popular AI models such as ChatGPT and Gemini.
How to fight back?
Most cybersecurity companies suggest fighting fire with fire and bringing in AI models of their own. Several research groups of “ethical” hackers are building their own malicious AI models, kept on a tight leash.
Mithril Security has created a tool called PoisonGPT to test how the technology can be used to intentionally spread fake news online and fuel mass disinformation campaigns. Mithril Security will soon launch AICert, an open-source solution that creates AI model ID cards with cryptographic proof tying a specific model to a specific dataset and codebase, backed by secure hardware.
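The idea behind such an ID card can be shown with a minimal sketch: hash the model weights, training data, and training code together so that any substitution, such as a poisoned checkpoint, changes the fingerprint. The file names below are hypothetical, and this is not the actual AICert API; the real system goes further by anchoring the proof in secure hardware:

```python
import hashlib
import json

def fingerprint(path: str) -> str:
    """SHA-256 of a file's contents (weights, dataset archive, or code)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def make_id_card(weights: str, dataset: str, code: str) -> dict:
    """Bind a model to the exact data and code that produced it."""
    return {
        "weights_sha256": fingerprint(weights),
        "dataset_sha256": fingerprint(dataset),
        "code_sha256": fingerprint(code),
    }

# Verification: anyone can recompute the hashes and compare them with the
# published card; a swapped (e.g. poisoned) checkpoint changes the fingerprint.
card = make_id_card("model.bin", "train_data.tar", "train.py")
print(json.dumps(card, indent=2))
```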
HackingBuddyGPT helps security researchers use large language models to discover new attack vectors, letting professionals run more controlled hacking experiments. The project offers valuable insight into how attack patterns generated by LLMs differ from those created by human operators. All data collected by the project is publicly available, allowing security researchers to analyze it and improve their defenses.
However, as before, the main responsibility lies with the end user: be wary of emails from outside known contacts, especially those with attachments, which attackers use to distribute malware.
Will there be more such malicious models?
WormGPT and FraudGPT were created around the same time, in the summer of 2023. Since then, similar fraudulent AI models have appeared only rarely and disappeared quickly, further evidence that many of these fraudsters aim to “scam” other fraudsters. According to Dr. Anna Mysyshyn, an expert on artificial intelligence regulation, cybersecurity, and digital governance, the hackers’ low public profile does not indicate a decrease in their activity.
“The lack of public announcements does not necessarily mean a lack of development of new tools. Cybercriminals can operate more covertly to avoid detection and countermeasures. There are also many private forums and closed communities where new programs and tools circulate without wide distribution, precisely to avoid the attention of the experts and authorities who would investigate them and create effective countermeasures,” Ms. Anna told dev.ua.
This view is echoed by Europol, which notes that cyberattacks will become larger and more effective due to the use of artificial intelligence, while multi-stage extortion and data theft will remain the main threats.