5 nefarious ways hackers are using generative AI | PCWorld
Artificial intelligence (AI) can be a force for good, as the growing list of fields it's being used to advance makes clear. But what about its capacity for evil?
The thought that somewhere out there a James Bond-style villain is sitting in an armchair, stroking a cat, and using generative AI to hack your PC may seem like fantasy, but frankly, it isn't. Computer security experts are now scrambling to counter millions of threats from hackers who use generative AI to compromise PCs and steal money, credentials, and data. And with the rapid proliferation of new and improved AI tools, it's only going to get worse.
The types of cyberattacks hackers employ aren't necessarily new. They're just more prolific, sophisticated, and effective now that they've been weaponized with AI. Here's what to look out for…
AI-generated malware
Next time you see a pop-up, you may want to hit Ctrl-Alt-Delete real quick! Why? Because hackers are using AI to pump out browser-targeting malware like there's no tomorrow.
Security experts can tell when malware has been written by generative AI by examining its code. According to research published in the journal Artificial Intelligence Review, malware created with AI tools is quicker to produce, more adept at evading security platforms, and harder to detect than handwritten code.
One example is highlighted in HP's September 2024 Threat Insights Report, in which the company's threat research team said it found malicious code hidden in a browser extension that hackers used to hijack users' browsing sessions and redirect them to fake PDF tools.
The team also found malicious code embedded in SVG images that could launch infostealer malware. The code in question contained native-language comments and variable names consistent with a generative AI tool, a clear sign of its AI origin.
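Because SVG is an XML format that can legitimately embed script, one way defenders triage suspicious image files is simply to flag any executable content inside them. Here's a minimal Python sketch of that idea; the tag list and checks are illustrative assumptions on our part, not the method HP's team used:

```python
# Minimal sketch: flag SVG files that embed script, a common infostealer
# delivery trick. The tag list and checks are illustrative assumptions,
# not the detection method from HP's report.
import sys
import xml.etree.ElementTree as ET

SUSPICIOUS_TAGS = {"script", "foreignObject"}  # elements that can run or embed code

def audit_svg(path: str) -> list[str]:
    findings = []
    tree = ET.parse(path)
    for elem in tree.iter():
        # Strip the XML namespace, e.g. '{http://www.w3.org/2000/svg}script'
        tag = elem.tag.rsplit("}", 1)[-1]
        if tag in SUSPICIOUS_TAGS:
            findings.append(f"embedded <{tag}> element")
        # Event-handler attributes (onload, onclick, ...) can also execute code
        for attr in elem.attrib:
            if attr.lower().startswith("on"):
                findings.append(f"event handler attribute '{attr}' on <{tag}>")
    return findings

if __name__ == "__main__":
    for finding in audit_svg(sys.argv[1]):
        print("suspicious:", finding)
```

Run it against a downloaded .svg file; any hit is worth a closer look in a sandbox before the image goes anywhere near a browser.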
Evading security systems
It's one thing to create malware using AI tools, but it's quite another to keep it effective by hiding it from security software. Hackers know that cybersecurity firms can quickly identify and block new malware, so they're using large language models (LLMs) to obfuscate or modify it.
AI can be used to blend existing malware into previously unseen strains or to create entirely novel variants that security monitoring systems can't identify. According to cybersecurity experts, this works best against security software that relies on recognizing known patterns of malicious activity. In fact, it's quicker to do this than to create malware from scratch, according to Palo Alto Networks' Unit 42 researchers.
The Unit 42 researchers demonstrated this in an experiment: using LLMs, they rewrote known malware into 10,000 malicious JavaScript variants, all with the same functionality as the original code.
According to the researchers, these variants managed to evade machine-learning malware detectors such as Innocent Until Proven Guilty (IUPG). They concluded that, with enough code transformations, hackers could "degrade the performance of malware classification systems" enough to avoid detection.
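To see why signature-based defenses struggle against rewritten variants, consider a toy demonstration in Python. The snippets below are harmless stand-ins rather than real malware, and the hash-lookup "scanner" is a deliberate simplification of how signature matching works:

```python
# Toy sketch of why trivial rewrites defeat signature matching: two
# functionally identical snippets hash to different values, so a blocklist
# keyed on file hashes misses the rewritten variant. Harmless stand-ins only.
import hashlib

original = "function greet(name) { return 'hi ' + name; }"
# The same logic with renamed identifiers, as an LLM rewrite might produce
variant = "function salute(who) { return 'hi ' + who; }"

known_bad = {hashlib.sha256(original.encode()).hexdigest()}  # signature database

def flagged(code: str) -> bool:
    return hashlib.sha256(code.encode()).hexdigest() in known_bad

print(flagged(original))  # True  -- the known sample is caught
print(flagged(variant))   # False -- the rewritten variant slips past
```

Behavior-based and machine-learning detectors exist precisely to close this gap, which is why Unit 42's finding that enough transformations can degrade even those classifiers is so concerning.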
Two other types of malware may be even more alarming because of how cleverly they evade detection.
Dubbed "adaptive malware" and "dynamic malware payloads," these types learn from and adjust their code, encryption, and behavior in real time to bypass security systems, cybersecurity experts say.
Both predate LLMs and generative AI, but experts explain that generative AI is making them more responsive and effective.
Stealing data and credentials
According to cybersecurity firms, AI software and algorithms are also being used to crack user passwords and logins more effectively and to access accounts without authorization.
Cybercriminals generally use three techniques to do this: credential stuffing, password spraying, and brute-force attacks. AI tools are proving useful for all three, they say.
Predictive biometric algorithms are helping hackers infer passwords from the way users type them, and AI is making it easier to break into large databases containing user data.
Additionally, hackers deploy scanning and analysis algorithms to quickly map networks, identify hosts and open ports, and fingerprint the software in use, all in search of exploitable vulnerabilities.
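The network-mapping step these tools automate is, at its core, ordinary port probing done at machine speed. The following Python sketch shows the basic building block, framed defensively so you can audit a machine you own; the host and port list are illustrative assumptions, and you should only ever scan systems you're authorized to test:

```python
# Minimal sketch of the port check that AI-assisted scanners automate at
# scale, framed as a self-audit. Host and port list are illustrative.
# Only scan machines you are authorized to test.
import socket

HOST = "127.0.0.1"                  # your own machine
PORTS = [22, 80, 443, 3389, 8080]   # a few commonly probed services

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the connection succeeded

for port in PORTS:
    state = "open" if is_open(HOST, port) else "closed"
    print(f"{HOST}:{port} {state}")
```

If anything shows up open that you didn't expect, that's exactly the kind of foothold automated reconnaissance is hunting for.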
Brute-force attacks have long been a favorite technique of amateur hackers. In this attack type, large numbers of businesses or individuals are subjected to trial-and-error login attempts in the hope that at least a few will slip through their defenses.
Traditionally, only about one in 10,000 such attacks succeeds, thanks to the effectiveness of security software. However, the growing popularity of AI password algorithms that can quickly analyze large datasets of leaked passwords and direct brute-force attacks more effectively is making that software less effective.
Cybersecurity experts warn that such algorithms can also automate hacking attempts across multiple websites or platforms at once.
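One practical countermeasure is to check whether your own passwords already appear in those leaked datasets. The Python sketch below does this with the Have I Been Pwned range API, which uses k-anonymity so only the first five characters of your password's SHA-1 hash ever leave your machine:

```python
# Defensive sketch: check whether a password appears in known leaked-password
# datasets via the Have I Been Pwned range API. Only the first five hex
# characters of the SHA-1 hash are sent over the network (k-anonymity).
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Each response line is "HASH_SUFFIX:COUNT" for hashes sharing the prefix
    for line in body.splitlines():
        tail, _, count = line.partition(":")
        if tail == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = times_pwned("password123")  # example only; never hard-code real passwords
    print(f"seen in {n} breaches" if n else "not found in known breaches")
```

Any password that shows up here should be considered burned, since it's exactly the kind of entry AI-assisted credential-stuffing tools feed on.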
More successful phishing and social engineering
Hackers are using mainstream generative AI tools like Gemini and ChatGPT, as well as their dark-web counterparts WormGPT and FraudGPT, to imitate people's writing and language styles and tailor social engineering and phishing attacks to their victims.
Hackers are also using AI algorithms and chatbots to harvest data from users' social media profiles, search engines, and other websites (and directly from the victims themselves) to create dynamic phishing pitches based on an individual's location, interests, or responses.
With AI modeling, hackers can even predict the likelihood that their hacks and scams will succeed.
They're also using smart bots that learn from previous attacks and alter their behavior to boost their success rate.
According to research, phishing emails created with AI software are more effective at deceiving people. One reason is that they typically contain fewer red flags, such as obvious spelling or grammatical mistakes, that would otherwise tip recipients off.
Singapore's Government Technology Agency (GovTech) demonstrated this at the Black Hat USA cybersecurity convention in 2021. There, it reported on an experiment in which spear-phishing emails generated by OpenAI's GPT-3 and emails written by hand were both sent to participants.
The experiment revealed that participants were significantly more likely to click on the AI-generated emails than on the handwritten ones.
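These findings also explain why spelling-based phishing filters are aging badly. The hypothetical Python sketch below shows a naive typo filter scoring a cleanly written AI phishing line as safe, while a structural check, comparing a link's visible text against its real destination, still catches it. The word list, domains, and sample email are made up for illustration:

```python
# Hypothetical sketch: a typo-based filter misses clean AI-written phishing,
# but a structural check (visible link text vs. actual href) still fires.
# Word list, domains, and the sample email are invented for illustration.
import re

TYPO_SIGNS = {"acount", "verifcation", "recieve", "urgnet"}

def typo_score(text: str) -> int:
    words = re.findall(r"[a-z]+", text.lower())
    return sum(word in TYPO_SIGNS for word in words)

def mismatched_links(html: str) -> list[tuple[str, str]]:
    # Flag anchors whose visible text doesn't mention the real destination host
    pairs = re.findall(r'<a href="https?://([^/"]+)"[^>]*>([^<]+)', html)
    return [(href, text) for href, text in pairs if href not in text]

ai_phish = '<a href="http://evil.example">bank.example/login</a> Please verify your account today.'
print(typo_score(ai_phish))        # 0 -- the typo filter sees nothing wrong
print(mismatched_links(ai_phish))  # [('evil.example', 'bank.example/login')]
```

The takeaway: with AI scrubbing the surface-level mistakes, defenses have to key on signals the attacker can't polish away, like where a link actually points.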
Science fiction-like impersonation
When you start talking about deepfake videos and voice clones, generative AI-powered impersonation gets a little science-fictiony.
Even so, hackers really are using AI to impersonate people known to their victims in videos and voice recordings (a technique known as voice phishing, or vishing) to perpetrate fraud.
In one prominent case from 2024, a finance employee was tricked into paying out $25 million to hackers who used deepfake video technology to pose as the company's chief financial officer and other colleagues.
These aren't the only AI impersonation techniques, though. In our article "AI impersonators will wreak havoc in 2025," we discuss eight ways AI impersonators are attempting to defraud you.