Audio: How an anti-fraud business fights algorithmic fraud

Financial institutions are looking to deepfake detection solutions in their fight against the growing threat of generative AI-driven fraud. 

The deepfake detection industry is expected to be worth $15.7 billion by 2026, according to consultancy firm Deloitte. 

AI voice fraud detection startup Herd Security is one tech provider that banks are turning to to reduce targeted attacks against their organizations and clients, Brandon Min, co-founder and chief executive of Herd Security, tells Bank Automation News on this episode of “The Buzz” podcast.  

Herd Security, launched in 2023 by Min and his co-founder and chief technology officer Greg Bates, “can detect the presence of AI on any live call or previous audio-based recording with less than 10 seconds of audio,” Min says.  

Herd Security will demonstrate its technology at Bank Automation Summit 2025 in Nashville, Tenn., on March 3. 

Listen to this episode of “The Buzz” podcast as Min discusses how banks can layer in deepfake detection tools to reduce fraud.  

Register for Bank Automation Summit 2025, taking place March 3-4 in Nashville, Tenn. View the full event agenda.  

The following is a transcript generated by AI technology that has been lightly edited but still contains errors.

Whitney McDonald 12:42:15
Hello, and welcome to The Buzz, a Bank Automation News podcast. My name is Whitney McDonald and I’m the editor of Bank Automation News. Today is February 6, 2025. Joining me is Brandon Min, co-founder and CEO of startup Herd Security. He is here to discuss how Herd Security’s technology is using AI to identify and fight voice-based fraud at financial institutions. Herd Security will demo its technology in March in Nashville at Bank Automation Summit 2025. Visit bankautomationsummit.com for more information about the summit and the demo challenge. Thanks for joining us, Brandon.

Brandon Min 12:42:52
Yeah, of course, and thanks again for having us, Whitney. My name is Brandon Min. I’m the co-founder and CEO of Herd Security. My background starts about eight years ago, when I jumped into the cybersecurity world. The biggest startup I was part of was a company called Duo Security, which specialized in multi-factor authentication; I was part of that company’s journey into getting acquired by Cisco. From there, I had really gotten a sense of how organizations treat their users in terms of user-based security. What I mean by that is: how well do users understand cybersecurity and best practices, as well as their role in protecting the organization as a whole? That always stuck with me. Multi-factor authentication is, in a sense, a personal thing, because it’s on everybody’s phone, and my time at Duo shaped the idea of building cybersecurity tools focused on user awareness or protection. So fast forward a few years, because it’s all a blur past the pandemic: we started Herd Security in late 2023, heavily focused on user-specific security. That led us into this area of AI-generated content and deepfake-based protection.

Whitney McDonald 12:44:33
Well, thank you again for being here, and let’s take that a step further. Why don’t you tell us a little bit more about Herd Security? Give me a little insight into what exactly you’re solving for.

Brandon Min 12:44:45
Yeah, yeah. Before I jump into it, I’ll set some ground-level context: I’ll be using the phrase social engineering a lot. It’s a common phrase, but just so everyone’s on the same page, social engineering is any type of attack against an organization that targets users. The most common is a fake phishing email, something to get somebody to give up something so an attacker can gain access to an organization. Typically that’s an account password, and nowadays multi-factor authentication credentials, etc. So I’ll be using that phrase quite a bit. Specifically, Herd Security helps banks combat voice-based social engineering attacks, primarily to prevent wire fraud and account takeover. This problem has been taking shape as generative AI has become such a huge, powerful tool, and an issue. I don’t want to completely knock it by calling it an issue, but it has presented issues to many different organizations that we work with across the board. Traditionally, as I said, social engineering has been focused on email-based security and phishing emails. I’m sure almost everyone has either seen a really poorly written phishing email or been tricked by one, maybe even an internal phishing awareness campaign, clicked on it and gotten enrolled in some extra security awareness training. I have as well. It has happened to me once in my life. I’m not proud to say that, but it’s true. 
But specifically with the new technology and generative AI, we’re seeing the ability to create a next level of content across the board for social engineering. That includes more in-depth emails with generative AI text, generating synthetic voice, and building AI agents that can mass-produce wider attacks and replicate attacks at a faster rate, so that one hacker in a basement somewhere in the middle of nowhere is actually able to go after very large enterprises, because of the repeatability that AI presents. There’s a myriad of different tools, both paid and open source, now on the market, and that allows fraud to really be everywhere and generated by anybody. As much as AI has leveled the playing field for everyday workers in terms of getting certain tasks done, it has also leveled the playing field for hackers to produce very sophisticated attacks. So that got us focused on this next level of voice-based social engineering attacks. The most common example is a phone call from someone impersonating either a person, an account or a customer, trying to take bank information, initiate wire fraud or beat voice verification-based platforms. Those are typically the most common that we see.

Whitney McDonald 12:48:20
Yeah, a couple of things to break down there. With generative AI, one of the things you mentioned is the scale. You’re not just one hacker in a basement, like you mentioned, who can do one scheme and move along; you can really go after these larger enterprises with this sophisticated technology that’s right at everybody’s fingertips. So maybe you could talk through what the conversations look like when banks approach Herd. What are they trying to solve for? What are they seeing? What are the problems they’re coming to you with, saying, hey, I have this issue over and over again, how do we eliminate that, or watch for that, or monitor that? More of a proactive than reactive take on fraud. Maybe you can talk us through what those conversations with bank clients look like.

Brandon Min 12:49:09
Yeah, absolutely. It’s primarily been focused on two sets of banks. We’ve come across teams that are very proactive about this problem, have read about it in the news and understand that this will become a huge problem, and not just in banking, but in every industry. Sadly, any kind of cybersecurity threat typically gets a reactive approach from most organizations, not a proactive one. But in the proactive conversations, many banks that come to us have essentially said they have gotten complaints from their customer base that people have called them impersonating the bank, or they’ve actually had small businesses get taken over, and the hackers are trying to initiate specific wire-based fraud against the bank while impersonating a specific customer. I’d like to take that a step further and say these attacks really come in two fashions. One is the use of synthetic voice with AI to impersonate a specific person’s voice. Somebody could take my voice from this podcast now; you really need only about five to 10 seconds of sample audio to be able to directly impersonate someone’s voice and transpose it live onto a call. So let’s say, from an internal standpoint, I’m the CEO of Bank A, and the CFO of Bank A calls me, and it sounds just like him. We’re having a conversation along the lines of, hey, we need to wire some money to a specific vendor, and it sounds just like the person we think we’re talking to. 
Some of the original banks that came in contact with us were actually community-sized banks where tellers were getting impersonated. The impersonators were talking to business customers in the bank’s customer base, and those customers recognized the voice of the teller. Even though they didn’t necessarily know that person by name, they had the sense of: I’ve heard this voice before, I trust this voice, and they were giving away very crucial account information. What those hackers were then doing was turning back to the community bank, impersonating the customer, and trying to initiate wire fraud. Of course, some have fallen for it, some haven’t, and it can be very powerful. The numbers are on the rise: I believe deepfake-based attacks went up over 700% in 2023; 2024 numbers are still coming out, but we estimate those to be even higher, and specifically against financial institutions, for typically two reasons. One, they have very easy-to-reach contact centers or some other way to get access to voice communication. And two, it is very simple to move money in those organizations, because they’re the ones moving money the most. So overall, that’s the first area, the AI-generated synthetic side. The second side is just general voice fraud. Some banks we’ve talked to are so large that you wouldn’t necessarily know your teller’s voice or name, and hackers could be using AI to copy a specific tone or match certain accents from certain parts of the US. We had a specific bank that was getting attacked from somewhere in the Middle East, and those hackers were impersonating Southern accents, because the bank was somewhere in the Deep South. 
Of course, that’s very accessible now, but it’s still a different form of AI-based attack. We’re also prepared for the types of attacks that don’t use AI, where callers are pushing for credit card information, account information, etc. We’re able to help organizations build risk profiles around how, we’ll say, pushy a hacker can be versus a customer.

Whitney McDonald 12:53:56
Now maybe, on the other hand, we could talk a little bit about how Herd fights this. How do you monitor for this? Obviously, the examples you’ve been giving show a sophisticated approach. Like you said, you don’t need much of an audio sample to get that trusted, recognizable voice, and it works on both sides: like you mentioned, it could be a CFO, or it could be the client side as well. How does the technology behind Herd work? What are you monitoring for? Talk us through the tech. How does a bank leverage the tech? Get us through the how.

Brandon Min 12:54:36
Yeah, absolutely. Well, I’ll start with the core of the tech, which is really our detection engine. At face value, we’re able to detect the presence of AI on any live call or previous audio-based recording with less than about 10 seconds of audio. The key here is that we can do this without any baseline training. There are a lot of tools out there that would come to a bank and say, hey, we have to work with you for maybe a month or two to establish some type of voice training for our AI before it begins working. That goes out the window with our product; we can implement within 30 minutes and begin working immediately. That’s one of the proprietary and really advantageous portions of our product: you’re not really getting much downtime there. As for integrations, as part of that core tech we wanted to allow banks to integrate this with any type of voice communication they do, or any kind of voice communication they’re worried about in the future. Most commonly we’re seeing it with VoIP-based systems, Cisco Finesse, Amazon Connect, etc., where we can directly integrate our technology into inbound call centers or contact centers, customer support lines, whatever you’d like to call them, and produce a score of AI-based risk within the first 10 seconds of any call. The beauty of this is a couple of things. One, we don’t need to change the contact center’s flow. We just add it into part of the conversation; they can continue to go through the same verification processes they already do, but they’re adding this extra quick layer of: is there AI presence on this call or not, immediately? And say that score is relatively high, let’s say 95% or 98%, the bank can choose what it wants to do after that. 
We have a bank we work with specifically, and I’m not going to give away their exact process, but let’s say they have a five-step verification process. What they were able to do is add the test for AI presence without actually having to change that five-step process. On the customer side, they don’t see any difference, and on the caller side, the timing is still the same, because you don’t need to wait for any type of verification. You just go through your flow, get the person talking, and we will give that response. What they’ve instructed people to do is: if the score is over 80 to 90% on the call, they actually go through another set of verification steps, and if it’s 100%, they say you need to either call back or go to one of our branches. Our motto is we don’t want to mess with the flow of a contact center; we just want to be a part of it, protecting overall safety without ruining anyone’s day-to-day or causing a lot of change management. So that’s the first way. The second way, which is something very unique to us, is that we have built ways to protect mobile devices as well, iOS and Android across the board. That’s what helps more with the internal conversations, the CEOs, CFOs and executives who need protection from this type of fraud. Not only do we develop detection technology for them to protect themselves, so the CEO can detect if somebody is using the CFO’s voice as AI, we are also building tools to allow CEOs, CFOs, etc. to protect their own voice. If they don’t recognize a number, they can actually turn on a synthetic voice for themselves in order to vet the call as they’re starting it, so their own voice can’t be stolen. We’re trying to build as many preventative measures there as possible. 
Typically, most accounts we work with use VoIP-based systems and mobile devices for those two use cases. Eventually we’re moving into video conferencing, like this call here, from the audio-based side as well.
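The threshold flow Min describes (score the first roughly 10 seconds of a call, then branch on that score without changing the existing verification steps) can be sketched in a few lines. This is an illustrative sketch only: the function name, score scale and exact action labels are assumptions for clarity, not Herd Security’s actual API.

```python
def escalation_action(ai_score: float) -> str:
    """Map an AI-presence score (0-100, per the thresholds Min cites)
    to a contact-center action layered on top of the normal flow."""
    if ai_score >= 100:
        # Certain AI presence: end the interaction over the phone.
        return "call_back_or_visit_branch"
    if ai_score >= 80:
        # High risk: run an additional set of verification steps.
        return "extra_verification_steps"
    # Low risk: the existing five-step verification flow is unchanged.
    return "standard_five_step_flow"
```

The point of the design, as Min frames it, is that the score is advisory: the contact center keeps its own process and only appends an action when the risk crosses a threshold the bank chooses.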

Whitney McDonald 12:59:27
Yeah, so it sounds like there are definitely advancements being made as well, different iterations growing as the fraudsters keep up; you’re trying to keep up with the fraudsters just as much as you can with what’s already in action today. Now, really quickly, I also wanted to mention that you will be doing a live demo at our upcoming Bank Automation Summit in Nashville. Without giving too much away, and I know you just talked through the need, how the product works, all that good stuff, maybe you can share a little bit about what attendees can expect from your live demo. What will they see?

Brandon Min 13:00:04
Yeah, yeah. Well, really, down to the basics, it’s everything I just talked about, because it can be shown in only a few minutes, and that’s really the beauty of it. We’re of course not going to show a full implementation; not until AI could do that for us, and I don’t know if we’re that good yet. But we’ll use a VoIP-based system and run a call from both of the perspectives I talked about, AI-based voice and non-AI-based voice, and utilize that in different ways to show different types of voice-based attacks. The main thing I want the audience to take away is not just what our solution can do, but really understanding the depths of this problem, because AI is still something we’re all getting used to. It’s still something businesses are hopefully building proactively into streamlining their business or getting more efficient, which I’m assuming is why they’re at places like this conference. But at the end of the day, building awareness around voice-based social engineering and just how powerful it can be will be the main goal here. I not only want to show how easy it is to build a sophisticated attack, which I will do by showing some of my old-school ethical hacker skills (I did only good-guy hacking, for the record); I want to show how a hacker can put something together in less than two or three minutes, how sophisticated that can look without our product, and then how our product is actually able to catch this across the board. So yeah, I’m excited to show it, and hopefully I still remember some of my security analyst and threat-based skills.

Whitney McDonald 13:02:22
You’ve been listening to The Buzz, a Bank Automation News podcast. Please follow us on LinkedIn, and as a reminder, you can rate this podcast on your platform of choice. Thank you for your time, and be sure to visit us at bankautomationnews.com for more automation news.

Transcribed by https://otter.ai
