A new report from threat intelligence firm KELA highlights how quickly cybercriminals are adopting AI tools and techniques, with mentions of malicious AI tools rising 200% through 2024.
The findings come from KELA’s 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology, which analyzed data from the company’s intelligence-gathering platform that tracks and analyzes cybercrime underground communities, including forums, Telegram channels, and threat actor activity.
The report also found a 52% rise in discussions of AI jailbreaking over the past year, alongside the 200% increase in mentions of malicious AI tools. Threat actors were observed continually refining AI jailbreaking techniques to circumvent the security restrictions of public AI systems.
Cybercriminals were also found to be increasingly monetizing so-called “dark AI tools,” including jailbroken models and purpose-built malicious applications such as WormGPT and FraudGPT. These tools are designed to support core cybercrime activities, including financial fraud, malware development, and phishing.
By removing safety restrictions and adding custom capabilities, the tools lower the barrier for less experienced attackers to carry out sophisticated attacks at scale.
On the phishing front, threat actors were found to be developing more advanced campaigns, using generative AI to craft convincing social engineering content, sometimes enhanced with deepfake audio and video to impersonate executives and deceive employees into approving fraudulent transactions.
AI was also found to be accelerating malware development, enabling highly evasive ransomware and infostealers that pose significant challenges for conventional detection and response techniques.
The cyberthreat landscape is shifting dramatically, according to Yael Kishon, AI product and research lead at KELA. Cybercriminals are not just using AI; they are building entire sections of the underground ecosystem dedicated to AI-powered cybercrime. To counter this growing threat, organizations must adopt AI-driven defenses.
To combat rising AI-powered cyberthreats, KELA advises organizations to invest in employee training, monitor evolving AI threats and tactics, and implement AI-driven security measures such as intelligence-based automated red teaming and adversary emulation for generative AI models.
Image: Revel/SiliconANGLE