80,000 Websites Flooded With Spam Thanks to OpenAI’s GPT-4o-Mini

An AI spambot flooded websites’ contact forms and chat widgets with spam messages generated by OpenAI’s GPT-4o-mini.

AkiraBot targeted at least 80,000 sites, primarily run by small and medium-sized businesses using e-commerce and website-building platforms like Shopify, GoDaddy, Wix.com, and Squarespace, according to security firm SentinelOne.

According to 404 Media, the tool prompted OpenAI’s chat API with the instruction “You are a helpful assistant that generates marketing messages,” then had the AI generate customized messages that it would post across the web to push dubious SEO services. The messages are tailored to each targeted website; a design firm, for instance, would receive a different message than a hair salon.
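For illustration, here is a minimal sketch, assuming the current OpenAI Python SDK, of how a fixed system prompt plus per-site context can produce tailored messages like the ones described above. The helper function, site names, and descriptions here are hypothetical; AkiraBot’s actual code has not been published.

```python
# Illustrative sketch only (not AkiraBot's published code): generating a
# site-specific marketing message with a fixed system prompt, assuming the
# current OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

def generate_pitch(site_name: str, site_description: str) -> str:
    """Hypothetical helper: ask the model for a message tailored to one site."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The system prompt reported by SentinelOne and 404 Media
            {
                "role": "system",
                "content": "You are a helpful assistant that generates marketing messages.",
            },
            # Per-site context is what makes each message look hand-written
            {
                "role": "user",
                "content": f"Write a short pitch for SEO services addressed to {site_name}, "
                           f"described as: {site_description}",
            },
        ],
    )
    return response.choices[0].message.content

# A hair salon and a design firm would each receive a differently worded pitch.
print(generate_pitch("Example Hair Salon", "a neighborhood hair salon"))
print(generate_pitch("Example Design Studio", "a boutique web design firm"))
```

Swapping in a different site description is all it takes to get a message that reads as if it were written specifically for that business.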

AkiraBot then posted these AI-generated messages on website chats and contact forms in an effort to persuade site owners to buy SEO services. The live chat widgets integrated into many modern websites were also targeted by earlier versions of the AI spambot.

According to SentinelOne, which says the bot first appeared in September 2024 and has no connection to the prolific Akira ransomware group, “searching for websites referencing AkiraBot domains shows that the spammer recently spammed websites in a way that the messages were indexed by search engines.”

AkiraBot was a sophisticated operation, however. Beyond OpenAI’s GPT-4o-mini, it relied on a variety of tools to bypass CAPTCHA filters, and it used a proxy service to evade network detection.

OpenAI has since disabled the API key used by AkiraBot. In a statement shared with SentinelOne, the company said, “We’re continuing to investigate and may disable any related assets. We take abuse seriously and are continually working to improve our systems to detect misuse.”

Recommended by Our Editors

SentinelOne thanked the OpenAI security team for its “continued efforts to deter bad actors from abusing their services.”

There have been several cases of OpenAI tools being used for nefarious purposes, such as the creation of online propaganda materials by foreign governments. Fraudsters, however, frequently rely on purpose-built AIs. WormGPT, which was discovered in mid-2023, for instance, helped criminals automate fraud by responding to victims’ inquiries while posing as banks.


Will McCurdy

Contributor

I’m a writer covering weekend news. Before joining PCMag in 2024, I wrote for publications including The Times of London, The Daily Beast, Vice, Slate, Fast Company, The Evening Standard, The i, TechRadar, and Decrypt Media.

I’ve been a PC gamer since the days of installing games from multiple CD-ROMs. I’m a writer who is passionate about the intersection of technology and human life. I’ve covered everything from crypto scandals and conspiracy theories to UK politics and Russian and international affairs.

Read Will’s full bio.

Read Will McCurdy’s latest stories here.
