Has AI Arrived or Is It All Just Hype? | EP 037
Transcript:
Connor Swalm:
Welcome to Gone Phishing, a show diving into the cybersecurity threats that surround our highly connected lives. Every human is different. Every person has unique vulnerabilities that expose them to potentially successful social engineering. On this show, we'll discuss human vulnerability and how it relates to unique individuals. I'm Connor Swalm, CEO of Phin Security, and welcome to Gone Phishing.
Hey, everyone. Welcome back to another episode of Gone Phishing. Today we have the special treat of being joined by not only a good friend of mine, Jimmy, but the VP of Revenue at CyberQP and also the warlock of privileged access management, as he was telling me before we started this call. How are you, Jimmy?
Jimmy Hatzell:
I'm doing phenomenal, Connor. I'm happy to be on with you. I'm in a soundproof room, I think you're in your parents' living room.
Connor Swalm:
Not my parents' living room anymore, but it is a living room nonetheless. I graduated from the basement about two years ago. If you've been reading the LinkedIns, you would have seen that.
Jimmy Hatzell:
Yeah.
Connor Swalm:
So I see a lot of you doing stuff with artificial intelligence, or at least talking about it, and that's one of the things we're talking about today. So a question I have: is AI actually here, or how far off are we? If you could put on your warlock hat, look into the future, into your magic crystal ball, and see when AI is genuinely going to be integrated into our tools, our software, and how we work, what would you say?
Jimmy Hatzell:
I would say it's going to come faster than people think. I'd say twelve months from now, most new software will have AI integrated into it. I think the big thing we're going to notice is call centers. When you call somewhere, instead of being left on hold, I think that's going to be switched over to AI in about three years, which will change a lot of things for us. Then it moves into AI assistants that are always around us, answering questions, teaching kids in school, that kind of stuff. That will come later. But I think major cost centers of a business, such as support, are going to be the area that gets disrupted the fastest and will be the most noticeable.
Connor Swalm:
I've always said that the less creative the job or the role, the easier it'll be for AI to actually be implemented there and to disrupt it. What are your thoughts on that?
Jimmy Hatzell:
Yeah, well, an AI doesn't have feelings. Well, it depends who you ask. But when I'm calling whatever Microsoft call center to try to get support or something like that, and I'm just yelling at them because I've been on hold for three hours. Not that I actually yell at anyone, I know it's not their fault: you didn't design this system. But yeah, I think that stuff will definitely change. You'll be talking to AI people. It's going to be weird. What if we're already talking to AI people and we just don't know it yet? I don't know, man. I could be an AI. Maybe the real Jimmy Hatzell died six months ago and...
Connor Swalm:
And I'm talking to AI Hatzell. You're just a person on a screen in a random box.
Jimmy Hatzell:
Yeah. Yeah. It's suspicious that I have such a homogeneous background, right?
Connor Swalm:
Huh, very suspicious. It looks a lot like the WeWork stations I've seen, but an AI would know that, so we should be wary. So three years. Three years until some kind of artificial intelligence system completely takes over on the telephone, even though somebody didn't design the system and they shouldn't be yelled at.
Jimmy Hatzell:
Three years. That's it? Yeah, I think so. I think we're that close. Just from what I've seen, the AI voices sound like people. They can process information really well and really fast. It's going to happen sooner than we think. There are also deepfakes and clones and that kind of stuff. That'll be soon after, because it's getting a lot better at mimicking people's voices and a lot better at generating video. Things are just accelerating crazy fast. So I don't know, that'll be soon after. But I think the call center one is an easy one for people to comprehend because of how painful it is. And it's the same thing: all these big companies offshored their call centers to save costs, and they're going to do the same thing with AI to save costs.
Connor Swalm:
Just so everyone listening knows, Jimmy's contact information, wherever you can find him, will be in the show notes. But I was watching something you did on LinkedIn, which is why I mention this, where you trained, I forget what website or tool it was, but you trained it on your voice and then gave it a script to read. And while I would probably know it's not you, if the connection weren't quite perfect, or if I didn't talk to you all the time, I wouldn't have known that wasn't Hatzell. It sounds like the Jimmy Hatzell that's here right now. Do you think right now AI has just gotten a lot of hype and is used as a selling buzzword?
Jimmy Hatzell:
Definitely. I mean, it's been that way for a long time. But you look at OpenAI and the ability to access these generative models so easily, and how easy it is to just hook their API into different products and add it on. We made a jump from here to way over there overnight. I would have hated to have started, or been in, the AI business and spent a lot of money working on things right before those APIs came out, because it just changed everything. It's like dial-up Internet versus high-speed broadband, or HD DVD versus Blu-ray, or whatever the other formats were when Blu-ray came out, right? Let's use that. What's the one that nobody used, the really big disc that was in between VHS and DVD? People were like, these are going to be it. I don't know.
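For readers who want to see what "hooking the API in" looks like in practice, here is a minimal sketch using OpenAI's Python client. The model name, prompt, and helpdesk use case are illustrative assumptions, not anything specific discussed on the show.

    # Minimal sketch: calling a hosted generative model from a product back end.
    # Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # picks up the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You summarize helpdesk tickets for technicians."},
            {"role": "user", "content": "Customer reports intermittent VPN drops since Monday."},
        ],
    )

    print(response.choices[0].message.content)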
Connor Swalm:
Blu-ray, VHS... I'm old enough to remember VHS, but I don't remember that one. I did watch a lot of VHS growing up, I promise. So it's mostly a buzzword now. But what I'm hearing you say is that things are progressing really fast, as technology typically does. And like you mentioned, getting access to OpenAI overnight changed the way a lot of businesses could function, and it probably put a few others out of business if they were trying to build their own generative models. Yeah, definitely. So what does the future of AI look like, both as a solution and, since some people are scared of it, what does its reputation look like moving forward?
Jimmy Hatzell:
I mean, AI is based on the information available to it, which is information on the Internet. And the Internet is generally people's worst version of themselves. For example, Reddit changed their pricing model, or their API rules, and it pissed a bunch of people off because all these AIs were trained on Reddit's data. But training a model on Reddit's data is a horrible way to imitate people. You get the worst version of the worst people. And I can say that because, you know, I'm a Redditor, I've been on Reddit for years. But that is not what we want to base humanity's consciousness off of.
Connor Swalm:
I was listening to a podcast where this guy basically said AI is like a really intelligent fill-in-the-blank. You give it a sentence that ends in a blank, and the AI goes, oh, I think I know that word. But like you said, it's based completely on what it's been trained on. Which, in the case of Reddit, I can only imagine what that last word is going to end up being.
Jimmy Hatzell:
Yeah. I don't know. It's like a six-year-old trained in nuclear physics.
Connor Swalm:
So you're in the MSP industry, I'm in the MSP industry. Next logical question: how should MSPs consider using AI moving forward? What are some ways you're seeing it used now, and what are some ways you think it'll be used in the future?
Jimmy Hatzell:
I think MSPs need to get out ahead of it with their customers. When I talk to business owners, they're telling me that they're getting questions from their clients asking, is it safe that my employees are using this stuff? I think John Harden and Auvik's Saaslio, that whole product, they're doing monitoring on SaaS and showed that something like 8% of employees have used ChatGPT on company networks. So people are using it, they're just putting their data out there, and nobody knows what to do about it. So MSPs have a real opportunity on thought leadership. That's one bucket: become experts in it the same way you became experts in printers years ago, and networking devices, and cybersecurity, and all that stuff. This is just the new technology thing. You have to know it and be able to talk to your clients and advise them on it.
And then internally, there are definitely some efficiencies in your business. I would say right now AI is very good with non-sensitive information, so marketing automation, that kind of stuff, where if there's a mistake, it's not the end of the world. I wouldn't unleash AI on your financial reporting or anything like that. You need to be careful in your business with how you're using it. If it's following up on tickets and closing things out, yeah, that's great. But if it's changing settings in Microsoft security or things like that, you need to be a lot more careful, because it can make mistakes. There's going to be great technology that does it all flawlessly or near flawlessly, if there isn't already; there are cool companies doing lots of really cool stuff. And then the third thing is supporting your customers with AI.
So every business in the world is going to start using AI over the next six months, year, five years, ten years, however long it takes. And the same way they need help connecting to a printer or getting their email right, they're going to need help getting the AI to do the thing they need it to do, whatever that is. We don't really know what that is yet. And MSPs are going to be the people that get those calls. So I think there's a whole future area of business for MSPs there. When it will happen, I don't know.
Connor Swalm:
That's a really interesting take on it. In the same way that MSPs have had to be experts in virtually everything else in "technology," and I'll air quote that for those of you just listening, because printers are technology, but when you think of what an MSP does, it's not necessarily just plugging a printer into a wall. It's all of these other, way more complicated systems and things that keep businesses running. And AI could be just another one of those. Hey, you're going to have to learn prompt engineering. That's kind of what you were getting at: if your clients don't know how to ask the right questions in the right way, then you're going to have to help them with that.
Jimmy Hatzell:
Yeah. And connecting one system to the other, right? Like, how do I make this thing work with this thing? There's a big area where integrators, for a long time, just connected applications, and a lot of that is happening on the tech side, on the product development side, sorry, the development side. But there's going to be smaller integration work, the way you connect Salesforce to Office 365, or connect Office 365 to log into other systems. There's going to be a lot of that with AI, and it's not going to work great, because AI needs data to do what it needs to do and users control the data. So how those things are all connected is going to be very complicated, I think.
Connor Swalm:
So you mentioned data security a tiny bit. How big of an issue, how much in the forefront should data security actually be when it comes to asking AI questions or training it on internal company stuff, in your mind?
Jimmy Hatzell:
Right now, if you use ChatGPT on their website, they can use any information that you put in for training purposes. So if you're constantly asking about your company's IP and all that, it could leak that information to another user in the future. We don't know entirely. It's the same way that you wouldn't upload company secrets to your personal Gmail account: you can't have your employees sending data to systems like ChatGPT that aren't managed by your organization. I think a lot of people need to start thinking about it that way. This is like personal email, right? People are going to leave with that information if they leave the business, and it can get leaked to other people, too.
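As a toy illustration of that point about not sending company data to systems you don't manage: some teams run a simple redaction pass before a prompt ever leaves the network. This is a minimal sketch under that assumption, not a real data loss prevention control, and the patterns are only examples.

    import re

    # Toy redaction pass: mask obvious identifiers before a prompt leaves the network.
    # Real data loss prevention needs far more than a few regular expressions.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(redact("Ticket from jane.doe@example.com, key sk-abc123def456ghi789"))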
Connor Swalm:
You know what would be really interesting is if you knew somebody was using ChatGPT, whether another user somewhere could ask, what is the email of this person who has also been using ChatGPT? If that data really is up in the cloud somewhere and has been used to train the model, could a directed enough question access it? That'd be really interesting.
Jimmy Hatzell:
I mean, it's not supposed to, but that stuff's going to happen. It may not be that exactly, but there's going to be tons of data leakage, training data that wasn't scrubbed, stuff like that. And there are a lot of biases, too, whether that's racial bias or political bias. You can't make an unbiased system; it's trained on people. So that's a whole area of it. And there are lots of arguments: maybe we should turn all the filters off, or maybe we should turn the filters on. I don't have all the answers to that. Personally, I think there should definitely be some moderation. Like I said, it's all trained on Reddit data, and you need to moderate that a little bit. But not everyone agrees with me, and that's okay. I know Elon Musk is trying to make his unfiltered version of OpenAI, of GPT-4, basically, and there are going to be open source ones too that have no moderation. But if I ask it for the recipe for making something that could blow up a building, it shouldn't tell me the recipe. Probably shouldn't.
Connor Swalm:
Probably shouldn't. Any last-minute advice for anyone listening? What's one thing they could do to learn a little bit more about AI? I think that's where a lot of people are these days, just wanting to know where to learn.
Jimmy Hatzell:
Just start using it. Don't use it with your company data, but just start using ChatGPT and asking questions. Ask it how to learn. Ask it how to learn more. I think we all have an opportunity to be at the forefront of this thing, and the more you understand it, the more luck, or opportunity, you'll have.
Connor Swalm:
I should say, I completely agree. For those of you listening, some have compared the invention and implementation of AI to the same level of impact the Internet had on the human race. So if it's even one quarter that impactful, it's definitely something you should be looking at today.
Well, thanks for joining. This has been a lot of fun.
Jimmy Hatzell:
Yeah, my pleasure. Thanks for having me on.
Connor Swalm:
Once again, I'm your host Connor, CEO at Phin Security. Jimmy's contact information, his LinkedIn or whatever he'd like to share with us, is going to be in the show notes. He does some really cool stuff with AI on his LinkedIn that I see all the time. I don't know if you're doing that anymore, but I used to see a lot of it, so hopefully this will get you to do some more.
Thanks so much for tuning in to Gone Phishing. If you want to find out more about high-quality security awareness training campaigns and how to launch them in ways that actually engage employees to change their habits, then check out Phin Security at Phinsec.io. That's P H I N S E C . I O, or click all of the wonderful links in our show notes. Thanks for phishing with me today, and we'll see you next time.