Dark Web Mentions of Malicious AI Tools Spike 200%


According to a study by threat intelligence firm Kela, mentions of jailbreaks and malicious AI tools on the cybercrime underground surged in 2024.

The company’s new report, 2025 AI Threat Report: How Scammers Are Weaponizing AI Technology, is compiled annually.

It found a 52% increase in discussions about jailbreaking legitimate AI tools like ChatGPT, and a 219% increase in mentions of malicious AI tools and tactics.

The latter relates to what Kela refers to as “dark AI” tools, while the former refers to ways of circumventing the guardrails built into legitimate systems in order to carry out malicious activity.

These dark AI tools are typically offered as a service on the cybercrime underground, and are either jailbroken versions of publicly available generative AI (GenAI) tools or models built using custom open source large language models (LLMs).

WormGPT, for instance, is based on the GPT-J LLM and is designed specifically for business email compromise (BEC) and phishing attacks.

“These dark AI tools have evolved into AI-as-a-Service (AIaaS), offering cybercriminals automated, subscription-based AI tools, allowing them to generate any malicious content,” the report noted. This lowers the barrier to entry, enabling large-scale attacks such as phishing, deepfakes and fraud scams.

Such tools are so popular and in demand that some threat actors are scamming would-be buyers with fake versions, Kela added.

The vendor noted that threat actors are also using LLM-based GenAI tools to:

  • Automate and improve the quality of phishing and social engineering, including through AI-generated audio and video.
  • Automate vulnerability scanning and analysis to accelerate the attack cycle (e.g., pen testing).
  • Enhance the development of malware, including infostealers and ransomware.
  • Automate and optimize identity fraud, including by using AI-generated tools to bypass verification checks.
  • Automate other cyber-attacks, such as credential stuffing, password cracking and DDoS.

The cyber threat landscape is changing dramatically, according to Yael Kishon, AI product and research lead at Kela.

“Cybercriminals are not just using AI; they are building entire areas of the underground ecosystem around AI-powered cybercrime.” To counter this growing threat, organizations must adopt AI-driven security defenses.
