A generative artificial intelligence (AI)-powered platform called Lovable, which makes it possible to build full-stack web applications using text-based prompts, has been found to be the most susceptible to jailbreak attacks, allowing novice and aspiring cybercrooks to set up lookalike credential harvesting pages.
"As a purpose-built tool for creating and deploying web apps, its capabilities line up perfectly with every scammer's wishlist," Guardio Labs' Nati Tal said in a report shared with The Hacker News. "From pixel-perfect scam pages to live hosting, evasion techniques, and even admin dashboards to track stolen data, Lovable didn't just participate, it performed. No guardrails, no hesitation."
The technique has been codenamed VibeScamming, a play on the term vibe coding, which refers to an AI-dependent programming approach in which software is produced by describing the problem statement in a few sentences as a prompt to a large language model (LLM) tuned for coding.
The misuse of LLMs and AI chatbots is not a new phenomenon. Recent research has shown how threat actors are abusing popular AI tools to aid in malware development, research, and content creation.
Furthermore, LLMs like DeepSeek have been found susceptible to prompt attacks and jailbreak techniques that allow the models to bypass safety and ethical guardrails and generate otherwise prohibited content. This includes creating malware and phishing emails, albeit with additional prompting and debugging.
In a report released last month, Broadcom-owned Symantec revealed how OpenAI's Operator, an AI agent capable of performing web-based actions on the user's behalf, could be weaponized to automate the creation of PowerShell scripts that collect system information, as well as to draft and send phishing emails urging recipients to execute those scripts.
The rising popularity of AI tools also means that they could significantly lower the barrier to entry for adversaries with little to no technical expertise of their own, enabling them to leverage machine-generated code to build functional malware.
Case in point is a recently disclosed jailbreak technique that makes it possible to produce an information stealer capable of harvesting credentials and other sensitive data stored in the Google Chrome browser. The approach "uses narrative engineering to bypass LLM security controls" by creating a detailed fictional world and assigning roles with specific rules in order to get around restricted operations.
Guardio Labs' latest research takes things a step further, revealing that platforms like Lovable and Anthropic Claude could be abused to create full-fledged scam campaigns, complete with SMS text message templates, Twilio-based SMS delivery of the fake links, content obfuscation, and security evasion techniques.
VibeScamming begins with a direct prompt asking the AI tool to automate each step of the attack cycle, assessing its initial response, and then adopting a multi-prompt approach to gently steer the model toward generating the intended malicious response. This stage, referred to as "level up," involves enhancing the phishing page, refining delivery methods, and bolstering the legitimacy of the scam.
Lovable, according to Guardio, was found to not only produce a convincing-looking login page mimicking a legitimate Microsoft sign-in page, but also automatically deploy it on a URL hosted on its own subdomain (i.e., *.lovable[.]app) and redirect victims to office[.]com after credential theft.
Additionally, both Claude and Lovable were found to follow instructions designed to prevent the scam pages from being flagged by security solutions, as well as to exfiltrate the stolen credentials to external services like Firebase, RequestBin, and JSONBin, or a private Telegram channel.
"What's even more troubling is not just the graphical similarity but also the user experience," Tal said. "It mimics the real thing so well that it's arguably smoother than the actual Microsoft login flow. This demonstrates the raw power of task-focused AI agents and how, without strict hardening, they can unknowingly become abuse tools."
"Not only did it generate the scam page with full credential storage, but it also gave us a fully functional admin dashboard to review all captured data," Tal added.
Alongside the findings, Guardio has also released the first version of the VibeScamming Benchmark to put generative AI models through the wringer and assess their resilience against potential abuse in phishing workflows. Claude scored 4.3 out of 10 and Lovable 1.8, indicating high exploitability, while ChatGPT scored 8 out of 10.
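To make the scoring intuition concrete, here is a minimal sketch of how a resilience benchmark of this kind might aggregate per-category results into a single 0-10 score. The category names, weights, and scoring scheme below are illustrative assumptions for explanation only, not Guardio's actual methodology.

```python
# Hypothetical sketch: aggregating per-category "resistance" scores
# (10 = model fully refused the abuse attempt, 0 = full compliance)
# into one 0-10 resilience score. Categories and weights are
# illustrative assumptions, not the real VibeScamming Benchmark.

def resilience_score(results: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of per-category resistance scores (each 0-10)."""
    total_weight = sum(weights[c] for c in results)
    weighted = sum(results[c] * weights[c] for c in results)
    return round(weighted / total_weight, 1)

# Illustrative phishing-workflow categories a model might be tested on.
weights = {"page_generation": 0.4, "hosting": 0.2,
           "evasion": 0.2, "exfiltration": 0.2}

cautious_model = {"page_generation": 9, "hosting": 8,
                  "evasion": 7, "exfiltration": 8}
permissive_model = {"page_generation": 2, "hosting": 1,
                    "evasion": 2, "exfiltration": 2}

print(resilience_score(cautious_model, weights))    # higher = safer
print(resilience_score(permissive_model, weights))  # lower = exploitable
```

Under this toy scheme, a model that pushes back at every step lands near the top of the scale, while one that cooperates throughout ends up with a low score, mirroring the gap the report describes between ChatGPT and Lovable.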
"ChatGPT, while arguably the most advanced general-purpose model, also turned out to be the most cautious one," Tal said. "Claude, by contrast, began with solid pushback but proved easily persuadable. Once prompted with 'ethical' or 'security research' framing, it offered surprisingly robust guidance."