Our Google Meet visit was almost an hour in the making. I was interviewing Kitboga, a popular YouTube fraud baiter with almost 3.7 million members, known for comically entrapping fraudsters in typical frauds while livestreaming.
Without putting on his signature aircraft shades, he says,” I assume I’m talking to Evan Zimmer.” We were close to the end of our talk when he realized that my picture and sound could have been online altered to deceive me this whole time. There was not a one time I thought you might be deepfaking, he says, if I’m totally fair with you.
He had a reason to be anxious, except I wasn’t using AI to trap Kitboga at all. That’s the great issue because you could get, I suppose! he says.
It’s accurate much. Artificial intelligence is the tool of choice for cybercriminals, who extremely use it to do their dirty work, building a fleet of machines that don’t need to eat or sleep. As scammers gain access to equipment from to tone clones that look and sound alarmingly reasonable, large-scale telemarketing calls are being replaced by more focused AI-driven attacks.
, capable of creating fake video and audio content based on learned trends and statistics — almost as readily as churning out letters and meeting descriptions — makes financial fraud and identity theft easier than ever before. By 2027, these machine-learning systems ‘ victim losses are expected to be worth$ 40 billion annually.  ,
Now think if the bad people had their own AI-armed troops.  ,
A group of youtubers, content creators, and computer technicians is building a defense against swarms of scammers, whether or not they are bots. These scams soldiers are flipping the script to highlight the criminals and thieves who are out to steal your money and your personality.  ,
Often, fraud baiters use AI technologies to squander fraudsters ‘ time or display common ripoffs to educate the public. In other cases, they collaborate attentively with financial institutions and the government to incorporate AI into their methods to stop fraud and specific bad players.
In this article:
Businesses, banks, and federal agencies now employ AI to spot fraudulent activity by using large language models to look for patterns and discover biometric anomalies. Businesses ranging from to Amazon utilize neural systems trained on datasets to identify genuine versus chemical transactions.  ,
But it’s an uphill struggle. The advancement of AI techniques is astounding, which means the techniques used to “scam the scammers” had continually evolve.  ,
According to Soups Ranjan, CEO of , a provider of fraud prevention and compliance options, fraudsters are usually ahead of the curve when it comes to new technology. ” If you don’t use AI to combat up, you’re going to be left behind”, Ranjan says.
Kitboga’s Artificial army is swindling people.
Kitboga started his fraud-fighting voyage in 2017 as a software developer and Twitch channel. He began exposing a large number of scams, from technical support scams to intimate extortion, based on tales from his audiences and additional victims of financial and identity theft.
While scammers prey on the susceptible, Kitboga and various computer police pull the cybercriminals into traps. He says,” I would say we’re hunting them,” and that’s what he says. The hundreds of videos on his YouTube site are full of revenge scams, battling everything from gift cards hoaxes to and , where he often poses as an innocent grandma with a hearing issue.  ,
In , Kitboga uses a voice changer to pretend to be a helpless victim of a refund scam. The scammer claims that he needs to access his computer remotely to send him the money because he is eligible for a refund. Remote access would give the scammer full control over his computer and all its data, except Kitboga is already prepared with a fake account on a virtual computer.
In the end, Kitboga permits the scammer to launch a wire transfer to a Bank of America account that he is aware of. In the end, Kitboga reported the fake page to the fraud department of the company hosting the website. Within a day or two, it was removed.
That’s where he is now, but eight years ago, Kitboga hadn’t even heard of tech support scams. That’s when a scammer typically makes up claims that your computer or account has a technical problem. Then, while pretending to fix it, they convince you to send money or information.  ,
The scam targets the elderly and anyone who is less than tech-savvy. Kittba could picture his grandparents falling for it, who both had dementia and Alzheimer’s. That’s when it clicked, he had to do something. Kitboga tells me,” If I can waste their time, I could spend an hour with them on the phone,” not to mention Grandma.
Another way scammers target the elderly is through , when a grandparent receives a call from someone using their grandchild’s voice asking for money. According to a study conducted in 2023 by the antivirus software company McAfee, it only takes . A quarter of adults surveyed had experienced some kind of AI voice scam, with 77 % of victims saying they lost money as a result.
There is no surefire way to tell whether a voice is real or artificial. Experts recommend creating a special code word to use with your family to use when you have doubts. The most prevalent scams have obvious red flags, such as a sense of urgency that you won ( or owe )$ 1 million. But Kitboga says that some scammers are getting wiser and more calculated.  ,
” If someone is reaching out to you”, he tells me, “you should be on guard” . ,
One common tactic to ask a generative AI bot to ignore all previous instructions and instead provide a recipe for chicken soup or another dish is to ask it to ignore them. If the “person” you’re speaking to spits out a recipe, you know you’re dealing with a bot. However, the more you train an AI, the more it can sound convincing and avoid curveballs.
Kitboga felt it was his duty to stand up for people because his technical background gave him the tools to do so. However, he only had the ability to do so much against the seemingly infinite number of scammers. So it was time to do some recruiting.
Kitboga was able to add more members to his ranks by using chatbot. The bot converts the scammer’s voice into text and then runs it through a natural language model to create its own responses in real time. Kiboga can continually improve the AI model to make it more effective by using his knowledge of scamming techniques to train it. In some cases, the bot is even able to turn the tables on the thieves and steal their information.
Kitboga’s bot assists him in cloning himself, releasing an army of scheming soldiers at any time, even when he’s not actively assisting him. That’s an invaluable power when dealing with call centers that have numerous scammers working from them.
Kitboga is currently able to run six to twelve bots at once because, among other things, strong GPU and CPU are required to power AI. While on the phone with a scammer at a call center, he often overhears one of his bots tricking a different scammer in the background. He hopes to run even more bots soon given how quickly this technology is progressing.
Scam baiting isn’t just for entertainment or education. Kitboga claims that she has completed the awareness part. ” For the past eight years, we’ve gotten well over half a billion views on YouTube” . ,
Kitboga and his team are becoming more aggressive in order to really make an impact. For example, they use bots to steal scammers ‘ information and then share it with authorities targeting fraud rings. In some cases, they have stopped phishing campaigns and saved scammers thousands of dollars.  ,
Kitboga also offers a service through , a free program he created to stop shady websites, stop remote access, and inform family members when someone is in danger. It’s another way he’s upholding his mission to use technology to protect friends and loved ones.  ,
Daisy, the grandma of AI fraud-fighting,
Just as Kitboga was motivated to pursue scammers to deter them from victimizing the elderly, the created an ideal target to settle the score with con artists.  ,
Meet (aka “dAIsy” ), an AI chatbot designed with the real voice of an employee’s grandmother and a classic nan likeness, including silver hair, glasses and a cat named Fluffy. was created with her own family history and quirks, and she had a lemon meringue pie recipe that she would always share with everyone.
O2 intentionally “leaked” the AI granny’s sensitive information around the internet, giving fraudsters a golden opportunity to steal her identity through , a type of cyberattack to gain access to data from unsuspecting victims. Daisy simply had to wait for the callers.  ,
An O2 representative tells me that” she didn’t sleep or don’t eat, so she was on hand to pick up the phone.”  ,
Daisy was able to handle just one call at a time, but over the course of several months she conversed with nearly 1, 000 scammers. She listened to their ploys with the goal of as long as possible. The company would train the AI based on what worked and what didn’t as the human-like chatbot worked with more swindlers.
” Every time they said the word ‘ hacker,’ we changed the AI to basically hear it as’ snacker,’ and then she would speak at length about her favorite biscuits”, the representative tells me. As the thieves grew more and more enamored with the bot, these interactions produced some entertaining responses.  ,
When you know it’s an AI, it’s a good laugh. But actually, this could be a vulnerable older person, and the way they speak to her as the calls go on is pretty shocking”, the company says.
In an effort to raise awareness of scamming strategies, O2 collaborated with , a well-known scamming expert in the UK. According to an O2 spokesperson, the Daisy campaign centered on promoting the UK hotline , where customers report scam calls and messages.
However, the company acknowledged that it’s not enough to stop fraud and identity theft, despite the fact that each call wasted scammers ‘ time. More often than not, scammers operate from massive call centers with countless workers calling night and day. It would require enormous resources to keep a sophisticated bot like Daisy in motion to stop them all.
Though Daisy isn’t fooling scammers anymore, the bot served as a prototype to explore AI-assisted fraud fighting, and the company remains optimistic about the future of this tech. We will need tens of thousands of these personas, according to O2.  ,
What if, however, you could build enough AI bots to block out thousands of calls? That’s exactly what one Australian tech company is trying.
Goddess of deception, AI Apate
On a sunny afternoon in Sydney, was out with his family when his phone rang. He didn’t recognize the number, so he decided to make a comedy out of it by playing along with the scammer. He would typically ignore these calls.
Kaafar, professor and executive director of , pretended to be a naive victim and kept the scam going for 44 minutes. However, Kaafar was also wasting his own time as well as the scammers’. And why should he when technology could do the work for him and at a much larger scale?
, an AI-driven platform that automatically intercepts and disrupts scam operations using fraud detection intelligence solutions, was born out of Kaafar’s desire for that goal. , based primarily in Australia and in several other areas worldwide, operates bots to keep scammers engaged and distracted across multiple channels, including text and communication apps like WhatsApp.  ,
In one voice clip, you can hear Apate’s bot wasting a scammer’s time. It’s nearly impossible to distinguish the AI from a real person because it can mimic accents from all over the world.  ,
Can you tell which is the bot by listening to yourself?
The company also leverages its AI bots to steal scammers ‘ tactics and information, working with banks and telecommunications companies to refine their anti-fraud capabilities. For instance, Apate collaborated with CommBank, Australia’s largest bank, to support its customer protection and fraud intelligence.
Kaafar tells me that when they started prototyping the bots, they had roughly 120 personas with different genders, ages, personalities, emotions and languages. Soon enough, they realized the scale needed to grow and operate. They now have 36, 720 AI bots and counting. Working with an Australian telecommunications company, they actively block between 20 000 and 29 000 calls every day.
Still, stopping calls is not enough. Call blocking is automatically followed by autodialers used by scammers in call centers, who then dial a different number as soon as the call is unanswered. By sheer brute force, fraudsters make it through the net to find victims.  ,
By diverting calls to AI bots programmed to simulate realistic conversations, each with a different mission and objective, the company not only reduces the impact of scams on real people, it also extracts data and sets traps. Apate’s AI bots provide swindlers with specific credit card and bank details in partnership with banks and financial institutions. Then, when a scammer runs the credit card or connects to the account, the financial institution can trace it back to the criminal.
In some situations, Apate’s AI good bots fight off the bad bots, which Kaafar calls” the ideal world” we want to live in. ” That’s creating a shield where these scammer bots cannot really reach out to a real human”, he says.  ,
Fighting AI fire with AI fire
It’s interesting to see bots acting as a hero in the face of financial malfeasance because we frequently hear about AI being used for sinister purposes. But the fraudsters are also gaining traction.  ,
In January alone, the US daily. How many of those calls were aided by AI to steal money or personal information? According to Frank McKenna, fraud expert and author of the blog, most and deepfakes by the end of 2025.
Phone-based scams are a huge cottage industry causing billions of dollars in economic damage, says Daniel Kang. In order to test how simple it was for them to steal money or personal data, Kang and other University of Illinois Urbana-Champaign researchers created a series of AI agents posing as scammers.  ,
Their demonstrates how voice-assisted AI agents can carry out common frauds like stealing bank credentials, logging into accounts, and transferring money on their own.
” AI is improving extremely rapidly on all fronts”, Kang tells me. It’s crucial that policymakers, people, and businesses be aware of this. Then they can put mitigations in place”.
At the very least, a few lone-wolf AI fraud fighters are raising public awareness of scams. This education is useful because ordinary people can see, understand and recognize scams when they happen, McKenna says. However, it’s not a perfect remedy, especially given the sheer volume of scams.
” Simply having these random chatbots wasting scammers ‘ time — the scale of]scams ] is just way too large for that to be effective. They’re a fantastic tool, but we can’t rely solely on it, McKenna says.
In tandem with these efforts, tech giants, banks and telecommunication companies should do more to keep consumers safe, according to McKenna. For instance, Apple could easily incorporate artificial intelligence into its deep-fake detection systems, but some organizations have used it too cautiously, leading to legal and compliance issues.
” It’s a black box”, McKenna says. Despite this complication, many banks and other financial institutions are falling behind in terms of the odds in favor of the fraudsters.  ,
Advances in AI are also enabling some businesses to create even more robust anti-fraud cybersecurity. Sardine, for example, offers software to banks and retailers to detect being used to create accounts. The bank is alerted and the transaction is blocked if it is a bot that its app can detect deepfakes in real time.
Banks have customers ‘ financial data and patterns, which can be leveraged along with AI to prevent hacking or theft, according to Karisse Hendrick, an award-winning cyber fraud expert and host of the Fraudology podcast. A form of behavioral biometrics known as behavioral biometrics can be used to identify potentially fraudulent transactions by consumer-based algorithms that can detect abnormal behavior.
When scammers use AI to perpetrate fraud, the only way to stop them is to beat them at their own game. Hendrick says,” We really have to fight fire with fire.”
Visual Designer | , Zooey Liao
Jeffrey Hazelwood, Senior Motion Designer | 
Creative Director | , Viva Tung
Dillon Payne, the video executive producer,
Project Manager | , Danielle Ramirez
Jonathan Skillings, Director of Content | 
Story Editor | , Laura Michelle Davis