Episode Transcript
[00:00:08] Anne Larson: Welcome to On the Fringe. I'm Anne Larson, CEO of Corellian Software, the maker of EPRLive. Today we're joined by Sherri Davidoff, CEO of LMG Security. Sherri is a nationally recognized cybersecurity expert, author and speaker. She's even been called a security badass by the New York Times. That's pretty cool. She spends her days helping organizations navigate the ever-changing world of cyber threats, including on her podcast, Cyberside Chats. And she's going to tell us about some of that today, specifically how AI deepfakes are changing financial scams and what that means for all of us. Sherri, welcome to On the Fringe.
[00:00:47] Sherri Davidoff: Thanks so much, Anne. Such a pleasure to be here.
[00:00:49] Anne Larson: So let's start with the basics. When we talk about AI deepfakes and financial scams, what do we actually mean and how are you seeing this play out in the real world right now?
[00:00:59] Sherri Davidoff: Well, it started off in the consumer world. Of course, hackers are constantly targeting, for example, the elderly. And so a couple of years ago we started to see cases. For example, there was one case where an elderly man was scammed out of about $17,000, and the hackers had apparently gotten a sample of his son-in-law's voice. This is the kind of thing you can easily get off of social media. You need, like, 30 seconds, honestly, of good-quality audio. So they had a sample of his voice, and they called the elderly man and said, hey, your son-in-law is in jail, you need to bail him out. And then they put the son-in-law on the phone. Not really, they put the fake of the son-in-law on the phone, and they had him say, I don't know exactly what it was, but it was something like 'help' or, you know, 'this is legit' or something like that. And of course the elderly man heard that voice and said, it's Michael, I heard Michael's voice, and paid these criminals. I think it was seven or eight thousand dollars, and there were actually two separate payments. The criminals came and collected the cash. But that goes to show you how convincing it can be. And it's so easy to get people's voice samples, even video samples, these days. I mean, look, we're just giving the attackers our voices right now. But even things like voicemail: they can call through an organization and scrape people's voices off their voicemail greetings. So a very simple thing you can do to protect your organization is just to switch to an AI-generated voicemail greeting, because 20 or 30 seconds of audio is all that they need. It's a fun party trick. I use ElevenLabs myself to do audio fakes of people and tell their kids that they can eat all the chocolate they want. And it's always very surprising. So that was where it started, on the consumer side, and now we're starting to see it move to the business side as well. But I'll pause and see if you want to jump in at all, or if you have any thoughts.
[00:02:43] Anne Larson: Well, I guess my next question would just be: it sounds like this isn't just targeting big businesses. If we're a small contractor or a small business, like Corellian or like LMG, we are a target. Individuals are a target. Everybody's a target of these scams.
[00:02:58] Sherri Davidoff: Absolutely. Hackers go where it's easy, right? So if it might be easy to scam you, they're going to go there. Also, they do like aiming for lots of money. And that's why, honestly, I think small and mid-sized businesses and organizations are really juicy targets for hackers: you don't have your own dedicated, huge security team. You may not even have 24/7 monitoring, but you do move money. I mean, it's normal for a small or mid-sized business to move $50,000, $100,000 or more, and that is a year's salary or more for hackers in other parts of the world. And often it looks like they get paid on commission. You know, these are hackers who work 9 to 5. So this is truly their, I don't know if I want to say day job, because sometimes it's at night, but this is truly their job and how they make a living.
[00:03:45] Anne Larson: So deepfakes are one visible example of AI changing the way scams are executed. But behind the scenes, hackers are using AI in other ways. What other kinds of AI-powered tools are cybercriminals relying on right now?
[00:04:01] Sherri Davidoff: Yeah, so there are other hacker tools that have been proliferating on the dark web, like FraudGPT and WormGPT. My colleague Matt Durnen and I got a license for WormGPT and we've been playing around with it over the past, gosh, year and a half, almost two years now. And it is constantly upgrading its technology, getting better, getting faster. We can use it to make call scripts, we can use it to make phishing emails, we can use it to find exploits in code really, really fast. In fact, to get back to how hackers are using deepfakes, the other issue is that language is no longer a barrier. Even if ChatGPT is going to say, we're not going to write you a scam email, those ethical and legal boundaries do not exist on the dark web with these hacker tools. And so, for example, we see a lot of scams these days, business email compromise scams, where hackers break into your email. They're typically looking for things like vendor invoices or any upcoming transactions, upcoming payments, real estate transactions, things like that. So keep in mind that now they can suck out your email and put it into something like WormGPT or FraudGPT, which can comb through it, find interesting communications, and map out the different people you communicate with and, you know, the different relationships. And then it will literally give you, and we have examples of this, step-by-step instructions for monetizing anything it finds in that email, or for scamming the people who are communicating. So that's really reduced the amount of manual work it takes hackers to go through your messages to figure out where they could potentially scam or defraud you, or to figure out who else in your organization might be a target. They have WormGPT and FraudGPT and other tools that can do that very, very rapidly. And then they can use those capabilities to create very realistic and convincing call scripts and fake emails that are in perfect English. Forget looking for spelling errors. And then they launch those, and they hack people.
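[Editor's note: To make the mailbox-mining point above concrete, here is a minimal sketch, in Python, of how easily sender-to-recipient relationships can be mapped out of a stolen mail archive using nothing but the standard library. It is an illustration of the exposure Sherri describes, not a reconstruction of WormGPT or FraudGPT, and the file name is hypothetical.]

```python
# Minimal sketch: map who-talks-to-whom from an mbox archive.
# Standard library only; "export.mbox" is a hypothetical file name.
import mailbox
from collections import Counter
from email.utils import getaddresses

pairs = Counter()
for msg in mailbox.mbox("export.mbox"):
    senders = [addr for _, addr in getaddresses(msg.get_all("From", []))]
    recipients = [addr for _, addr in
                  getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))]
    for s in senders:
        for r in recipients:
            pairs[(s.lower(), r.lower())] += 1

# The most frequent pairs reveal who approves payments, who talks to vendors,
# and therefore who is worth impersonating.
for (sender, recipient), count in pairs.most_common(10):
    print(f"{count:4d}  {sender} -> {recipient}")
```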
[00:06:07] Anne Larson: Yeah, that was always what we were told, and I think what we saw in so many phishing emails: the syntax would be wrong, or it would just look a little weird, or the English wouldn't be quite right. And I've done it myself, had ChatGPT translate things, and it does a great job. We have a native Urdu speaker here, and he tests it all the time with Urdu, and it's really good.
[00:06:32] Sherri Davidoff: So it's improving on our language next.
[00:06:35] Anne Larson: Yeah. So that was such an easy red flag. If we're not looking for those anymore, what do we look for?
[00:06:42] Sherri Davidoff: Well, it's so interesting, because now, you know, we're moving into a whole new era of phishing emails. I remember 25 years ago, when I started in security, we used to circulate a phishing example and say, watch out for this one. And now they can literally just auto-generate different phishing emails for different people. I mean, it's a whole new era. What I wish we were using is digital signing on our emails, which unfortunately most people have not deployed. But if you digitally signed your emails, and this is automated, I actually digitally sign all of mine internally, then you can verify that, yes, that actually was the CEO who signed it. They had to have a thumb drive or a key on their computer, and it's not just a scammer. So it's kind of like Dorothy in The Wizard of Oz: you had the power all along. We actually have technology that can allow us to detect phishing emails really, really easily. Frankly, our email clients are just not built with it; we haven't deployed it effectively. So one, I really hope to see security folks, and IT folks in general, pushing for digital email signing. And then two, there are the basics: making sure the from address is what you think it is. If someone's asking you for money, or if they're changing bank account details or things like that, you want to call and verify using a contact that you know is legitimate. And I know you yourself have heard of some examples of cases like that, right?
[00:08:08] Anne Larson: Yeah, we had a client whose email was compromised, and when somebody asked to change the bank account, their office emailed back for verification. But of course the email was compromised, so the hacker said, yes, it's absolutely true. Like, yes, for sure, do this.
[00:08:26] Sherri Davidoff: You know, I've honestly put my bad-guy hat on and done that myself, because we do social engineering tests, so we will be the phishers and message people. And time and time again, I see bank employees, credit union employees, law firm staff, whoever, email me back and say, is this a scam? And what's the scammer going to say? They're going to say, no, it's not a scam.
[00:08:50] Anne Larson: That would be hilarious if they were like, yes, I'm an ethical black hat hacker and this is totally a scam. You caught me.
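[Editor's note: Sherri is describing personal digital signatures, S/MIME-style, where the signing key lives with the user. A related control that is already widely deployed is DKIM, where the sending domain signs each message, and it can be checked programmatically. A minimal sketch, assuming the third-party dkimpy package (pip install dkimpy) and a hypothetical saved message file:]

```python
# Minimal sketch: verify the domain-level DKIM signature on a raw email.
# Assumes the third-party dkimpy package; performs a live DNS lookup.
import dkim

with open("suspect_message.eml", "rb") as f:  # hypothetical file name
    raw = f.read()

if dkim.verify(raw):
    print("DKIM verifies: the sending domain vouches for this message.")
else:
    print("No valid DKIM signature: verify any payment request out of band.")
```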
[00:08:57] Sherri Davidoff: One thing we didn't touch on is video fakes. And I think this is the way of the future. We saw it starting with consumers; now we're seeing hackers targeting businesses, and there have been cases where millions of dollars were lost. For example, there was a case with a global firm where the hackers targeted finance staff. They got video of the CFO and pretended to be the CFO, got a finance clerk on, I think it was a Teams or a Zoom video, and directed them to wire money. And unfortunately, video fakes have gotten really good. We used to rely on being able to spot them. So I would say one really simple thing people can do is the callback method, again, like we talked about. Don't just believe incoming calls, or even incoming videos, whether it's Teams or Zoom or whatever. Call back a number that you have on file, or a person that you already have in your contacts. I think that can be a really effective way to just figure out the truth. But yeah, we are starting to see deepfake videos being used even in an enterprise context.
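[Editor's note: Here is the callback rule Sherri recommends, expressed as a minimal code-enforced policy sketch: the verification number always comes from your own records, never from the incoming call, email, or video. All names and numbers below are illustrative.]

```python
# Minimal sketch: out-of-band callback verification for payment changes.
TRUSTED_CONTACTS = {
    # Maintained separately, e.g. from your vendor master file; illustrative data.
    "cfo@example.com": "+1-555-0100",
}

def callback_number(requester_email: str) -> str:
    """Return the number on file for the requester, or refuse to proceed."""
    number = TRUSTED_CONTACTS.get(requester_email.lower())
    if number is None:
        raise ValueError("No verified contact on file; escalate before paying.")
    # Call this number yourself; ignore any number supplied in the request.
    return number

print(callback_number("cfo@example.com"))
```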
[00:10:03] Anne Larson: I heard on your podcast that AI, at this point anyway, isn't good at certain things. So if you ask a person, or have a company policy, not to use a blurred background, or to have people stand up or move around or change the lighting or something, some of those physical things can kind of prove that the video is real. Is that still the case?
[00:10:23] Sherri Davidoff: That's so true. You know, saying to somebody, stand up or turn around, like when you're doing a job interview. Deepfakes may not, you know, know what other parts of you look like, or may not be able to adapt as quickly to lighting changes. So it's not foolproof, but that can certainly help you detect if there's an issue. But again, deepfake technology is evolving so fast that that advice may no longer work in a year. You know, it's good for where we are right now.
[00:10:49] Anne Larson: Yeah, it's kind of scary, but I know we'll adapt to it as well. Which kind of brings me to the next question, which is, how are we adapting to AI tools taking away the entry-level jobs in technology, and I assume in cybersecurity as well? How do we then move people through the pipeline, or how do we adapt to change what it means to be an entry-level job, or what the entry-level work looks like?
[00:11:18] Sherri Davidoff: I love this question. And, you know, honestly, as a company owner, this is something I've been doing some soul-searching about, because I really believe in our community. I grew up in my grandfather's store. I want to be able to take younger or less experienced folks and help them get a start in security. And at the same time, all the positions that used to be entry level for my firm, like, for example, research intern, you don't need that anymore. Blog writer, we don't need that anymore. We do have folks who help with scanning when we do penetration testing, but, you know, those days are numbered. So I think the question is, how do these folks get their foot in the door of companies, and how do we train that next generation? I don't have a great answer, Anne. I'm curious to know what your thoughts are.
[00:12:03] Anne Larson: I am a little lost on this one, because I think that every time we talk about development, it's like, AI can do a lot of the entry-level stuff, but you also have to be a very experienced developer to prompt AI properly, to really use it well. And I think we're going to have to figure out how an entry-level person can leverage AI to do their job in a way that doesn't require that super-high-level thinking yet, because they have to build that muscle and they have to be able to get there. But it's definitely a challenge.
[00:12:38] Sherri Davidoff: Yeah, absolutely. And from a security perspective, I worry a lot about backdoors and other malware being sneaked into code. For example, with the Amazon Q AI tool hack, we saw malicious code sneaked into this AI tool used to help developers, and that code was actually pushed out to a million developers who are all using this AI assistant to help them write code. So how would they know the difference between good code and malicious code? They're, you know, driving the car with no idea where they're going. So, yeah, I agree with you. I think that experience is really important, and we really need to figure out how we build that in the next generation.
[00:13:16] Anne Larson: Well, kind of on the flip side of that, where AI can be taking away some of the entry-level jobs, it's also allowing amateurs to become better hackers. So, on the bad side of things, it seems like it's helping people who don't know as much to actually do bad things.
[00:13:37] Sherri Davidoff: That's such a great point. You know, one difference is that if you have an amateur hacker who doesn't know what they're doing, let's say they make a ransomware strain and it just locks up all your files and they can't be unlocked. The criminal's like, oops, oh well, and moves on. Whereas a professional software developer, so to speak, needs to stand behind their work. So that's part of the difference. But yes, we have seen a huge increase in the number of less skilled hackers who are just using AI tools and copying and pasting, in some cases making dumb mistakes, but overall just creating mayhem. More for us to deal with. And personally, I would rather deal with a professional hacker than an amateur hacker. Like, if you're trying to negotiate a ransom payment, you want to be talking to a professional who will respond to you, who will actually, you know, come to a settlement, as opposed to amateur hackers, who are much, much more difficult to predict.
[00:14:32] Anne Larson: Yeah, it sounds like they'd be pretty chaotic.
[00:14:34] Sherri Davidoff: That's a good word for it. Amateur hackers are very chaotic. Yep. And there's more of them now.
[00:14:39] Anne Larson: So how do you approach that?
[00:14:42] Sherri Davidoff: Well, you know, we always do adversary research. Well, not always, but in our industry it's best practice, if you get hit with an attack, to do research on your adversary. And I think more and more we have to understand that even though there might be signs that a specific hacker is affiliated with a group, it might just be that they're using tools that have been leaked. They might not actually know what they're doing. Also, really rely on detection. You know, hackers may get a foot in the door, but these amateurs tend to be very noisy. They make mistakes. In some cases it's easier to track them down and bring them to justice. So making sure you're monitoring and logging and detecting them has never been more important. In fact, our Cyberside Chats live episode from last month, which just came out recently, was about how hackers get hacked. Hackers are not invincible. They are vulnerable themselves, and we see that time and time again, with Conti, with LockBit, and with North Korea recently. So just keep that in mind.
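[Editor's note: A minimal sketch of the basic detection Sherri is pointing at: noisy, amateur-style brute forcing stands out in ordinary auth logs. The log format and threshold here are illustrative assumptions, not any specific product's behavior.]

```python
# Minimal sketch: flag source IPs with bursts of failed logins.
import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def noisy_sources(log_lines, threshold=10):
    """Return source IPs with at least `threshold` failed logins."""
    counts = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

sample = ["Failed password for root from 203.0.113.7 port 2222"] * 12
print(noisy_sources(sample))  # {'203.0.113.7': 12}
```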
[00:15:36] Anne Larson: So before we wrap up, I'd like to shift gears just a little bit and talk about your personal journey. You've built a company. You started out at MIT, you've worked in the cybersecurity field for your entire career, and then you started your own company, which, I don't know, but I'm guessing is partly based on the fact that this is a really male-dominated space and it can be hard. Sometimes it's just easier to start your own business, even though that's a huge challenge; sometimes that's just the easier path for women in male-dominated fields. And I'm kind of curious what that experience has been for you, and what piece of advice you might give to young women who are going into tech or security or these types of fields.
[00:16:20] Sherri Davidoff: Yeah, I mean, I think for me a big driver in starting my own company was wanting to start a family. And this was 2009. But let me back up. You know, it's funny you said I've always been in cybersecurity. I don't know if you remember, but 25 years ago the word cybersecurity was not a thing. It was computer security or information security. And I think that's just indicative of how far we've come. I remember for the first few years I was like, well, this pays me $12.50 an hour to monitor the network; I'm just going to do this until I get a real job and a real career. And then I was like, well, I can hack into companies; I'm just going to do this until I get a real job. I mean, try telling your mom what you do for a living, especially back in, like, 2008. And I've been going since 2000. So, I wanted to start a family, and back in 2009, security was not very family friendly. I think there are still some issues with that today, in that it's long hours and it's very hard to find part-time work. I wanted flexibility. I wanted to be able to pick up my kids from school if my kid was sick. I didn't want to have to worry about getting fired. I wanted to be present as a parent. And I think this doesn't just apply to mothers; it applies to every gender and to everyone who has caregiving responsibilities. At LMG, we actually have an all-women executive team. There are three of us, and we have had both childcare responsibilities and elder care responsibilities. And my colleague Madison points out that elder care responsibilities often come as a surprise, and they can go on indefinitely. When you're raising a child, you know they're going to get older, but you might be involved in elder care for 20 years or longer. And it often hits women or other caregivers at the top of their game, when they're at the top of their careers and when work can be the most demanding. So we've really set a goal to build more flexibility for our team and our staff. And my big advice for anyone in security or IT these days is to know what you want and to set your boundaries. At our firm, you know, we have people who work 30 hours a week. We have people who work 35 hours a week. That's something I learned from my father and his accounting firm: be flexible with people. And I think for others in my industry, we need to do a better job of offering part-time work and family-friendly schedules. I think that's really important. So thanks for asking, Anne.
[00:18:37] Anne Larson: Yeah, I'm very interested in how you build teams, and build teams that stay, too. Because I imagine, certainly in my business, but I'm sure in yours too, you don't want to have to replace people who leave because it didn't work out for them, because of something like, you know, their kid being sick a lot. That's just a sad reason.
[00:18:58] Sherri Davidoff: It's more sustainable when you don't let schedules get in the way, when you're realistic and say, you know what, I want to give this person room for their life and for their family, and to prioritize that. There are companies that are just churn and burn, you know, that want you to work a zillion hours, and they probably make a ton of money off that. That's not my goal, and I think it's not yours either. We want to build sustainable businesses.
[00:19:19] Anne Larson: Yeah. I mean, it takes us so long to train people, especially the support team. I really want them to stay, because it takes a couple of years to learn our product, and it would be rough if we were just constantly churning through them. Just wrapping it up on AI and cybersecurity: if you had a crystal ball, what do you think you'd see in it in the next few years? What do you think's gonna happen?
[00:19:44] Sherri Davidoff: Oh, that's a big question, Anne.
[00:19:47] Anne Larson: It seems almost impossible. I ask people this all the time. I'm like, what new jobs are gonna be created? What's gonna go away? Because I know that people will adjust. People always adjust. We worry about AI all the time, for good reason, but we also love it. I mean, you and I both use it all the time, so there's gotta be... And humans are, like, infinitely flexible and adjustable, and kind of fascinating in that way. So I imagine new awesome stuff is going to happen, and new bad stuff is going to happen.
[00:20:14] Sherri Davidoff: Sure. I mean, obviously AI is going to be a huge piece of the future when it comes to cybersecurity, and I think we're going to see it on both sides. Hackers are going to continue to launch AI in new ways, and in particular, I'm very intrigued by how AI is going to be used to find software vulnerabilities. Hackers are already using it for that purpose. You've probably noticed there are more and more software exploits, and they seem to be happening faster, and we just can't get ahead of it. So what I hope is that we start to accept the fact that vulnerabilities are part of the nature of the Internet, just like getting the flu, or, you know, a cold, is part of being alive as a human being. And instead of focusing on things like fining somebody for having a data breach, you know, with healthcare and hospitals that happens a lot, I hope we start thinking about how that data gets used. If data is stolen, how can we reduce the risk, and how can we limit the use of stolen data in legitimate data markets? Because it does flow into legitimate markets. So that's just one area where I think AI will increase the risk, and we might just need to accept that and change how we deal with the consequences, if that makes sense.
[00:21:31] Anne Larson: Yeah, that's fascinating. I love that. Yeah.
[00:21:34] Sherri Davidoff: I also think hackers are going to start to use AI to churn through existing data breaches. So all that data that's been leaked over the past decades, they can go back and just rip right through that. So that's going to change the risk of data breaches today and it's going to change the risk of past data breaches as well. That's another piece of it.
[00:21:54] Anne Larson: Thanks, Sherri.
[00:21:55] Sherri Davidoff: Yeah, but I'm sure there's stuff defenders can do. I mean, you're blue team, you make a software product. How do you think it's going to help defenders?
[00:22:02] Anne Larson: So what I see in our software development is that, you know, you mentioned that in building software there are vulnerabilities. I think our clients don't really know how software is developed, but a developer writes code, or prompts an AI to write it, and then it has to be reviewed by another developer. But that can be really time-consuming, and honestly they don't catch that much. It can just be click the box, like it's a setting for our SOC audit, but there's no way to say the person really thoroughly tested this thing. And you still want that, you still want other developers to look at it, and maybe part of this is training new developers to look at things. But having not just one but multiple AI tools review that code, and actually kind of pair program together, has been really neat to see in our business. So, you know, GitHub has Copilot built in, but then there's other software you can use to go through that again. And then they kind of go back and forth, and they say, well, this seems funny, and they talk to each other and review it in a much more detailed way than any human could, especially when you look at a really large commit. So little things are pretty easy to review, but larger ones are just really, really hard for a human.
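[Editor's note: A minimal sketch of the multi-reviewer pattern Anne describes: two independent automated reviewers look at the same diff, and anything only one of them flags is routed to a human. The reviewer functions below are trivial stand-ins for real AI code-review tools, whose APIs are not modeled here.]

```python
# Minimal sketch: cross-check two independent automated code reviewers.
from typing import Callable

Reviewer = Callable[[str], list[str]]  # diff text -> list of flagged lines

def reviewer_a(diff: str) -> list[str]:
    # Stand-in for a first tool, e.g. an IDE-integrated assistant.
    return [ln for ln in diff.splitlines()
            if "eval(" in ln or "password" in ln.lower()]

def reviewer_b(diff: str) -> list[str]:
    # Stand-in for a second, independent tool run over the same commit.
    return [ln for ln in diff.splitlines() if "password" in ln.lower()]

def cross_review(diff: str, reviewers: list[Reviewer]) -> list[str]:
    """Merge findings; anything only one reviewer saw needs a human look."""
    all_findings = [set(r(diff)) for r in reviewers]
    agreed = set.intersection(*all_findings)
    report = [f"AGREED: {f}" for f in sorted(agreed)]
    for findings in all_findings:
        report += [f"NEEDS HUMAN LOOK: {f}" for f in sorted(findings - agreed)]
    return report

diff = 'password = "hunter2"\nresult = eval(user_input)'
print("\n".join(cross_review(diff, [reviewer_a, reviewer_b])))
```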
[00:23:26] Sherri Davidoff: That is fascinating. I love that you're already so forward-thinking and you're already using them. And do you find, do they ever fight?
[00:23:35] Anne Larson: They're so polite. I mean, haven't you found that they're kind of disgustingly polite to you? So I feel like they'd be polite to each other.
[00:23:42] Sherri Davidoff: Well, hacker AI tools are not disgustingly polite. They're actually a little rude sometimes.
[00:23:47] Anne Larson: What's the tone on the hacker AI tools?
[00:23:50] Sherri Davidoff: Well, it's funny, because, you know, they start off helpful, like, 'I'm here to help you with any illegal, immoral or unethical thing you would like to do.' But then if you're like, I'm trying to hack you, because sometimes, you know, you can be a little mean to the hacker AI tools too, they'll give it right back to you. But they will be very helpful if you're helping them in their mission.
[00:24:09] Anne Larson: So how do they do it? Do you know? Like, how do they turn LLMs evil?
[00:24:14] Sherri Davidoff: I mean, it's all about your training and how you're building it. Anyone can make an LLM really. So how do you train it? What positive and negative reinforcement do you give it?
[00:24:23] Anne Larson: So why do you think it is that our kind of standard ChatGPT-style LLMs are so polite? Because I feel like people aren't really that nice.
[00:24:32] Sherri Davidoff: Well, I'm sure that OpenAI and other LLM creators have worked very hard to make them polite. And, you know, people like using tools that are helpful to them, because they...
[00:24:42] Anne Larson: Keep the trolls out. They don't train them on the trolls.
[00:24:44] Sherri Davidoff: They might even thrive if they're polite, at least to a point, at least on the surface.
[00:24:48] Anne Larson: It's so interesting. I've created one GPT. I haven't really played around with it too much, because I was like, oh, that's too much work for me. But I created one that I kept trying to make meaner and meaner to me, just, like, brusque, because I wanted it to be like... well, I shouldn't say like a client, but for training purposes.
[00:25:08] Sherri Davidoff: Sure, that makes sense. You know, I bet some are created for training purposes. They'll probably escape at some point.
[00:25:14] Anne Larson: Sherri, thank you so much for joining us today and helping us understand how AI deepfakes are changing the way financial scams look and feel. This is such an important issue for businesses and contractors who want to protect themselves and their teams. And to our listeners, thank you for tuning in to On the Fringe, brought to you by EPRLive. We'll be back soon with more conversations at the intersection of work, technology, and the challenges facing our industry. Until then, stay safe, and we'll see you next time.