February 1, 2024

AI at the Gate: Navigating the Future of Cybersecurity with SonicWall’s Bobby Cornwell

By Randy Ferguson

In the face of the digital age’s advancements, AI’s role in cybersecurity presents both innovation and challenges. CloudTweaks welcomes a Q&A with Bobby Cornwell, Vice President of Strategic Partner Enablement & Integration at SonicWall, to discuss the pressing issue of AI-enhanced cyber threats. With Cornwell’s extensive experience in network security, this conversation aims to uncover the complexities of AI-driven attacks, their impact, and defense strategies. We’ll navigate the intricacies of combating AI-enhanced threats, exploring SonicWall’s approaches to safeguarding digital integrity in an era where cyber risks are constantly evolving. Join us for a critical exploration of the future of cybersecurity in an AI-dominated landscape.

RANDY: Can you elaborate on how threat actors are using Large Language Models (LLMs) to compile and utilize breach data in novel ways? How does this differ significantly from traditional methods of cyberattacks?

BOBBY: Converting data of this nature into something readable by an LLM takes a lot of skill and resources. It’s not something anybody will be doing as a hobby today. However, if you look at the threat industry, you will see that it is a multibillion-dollar industry, and one sponsored by different governments. Take the elite threat actors, give them the resources they need, and this data could be put into an LLM and used in ways that have never been seen before. Again, this is not something I personally feel is being done at large scale today; it is either in its infancy or still at the concept stage.

Imagine the possibilities if threat actors were able to take the data from this leak, plus data from other leaks, and combine everything into an LLM-readable format. At that point, threat actors could ask a compromised or modified AI platform that they pulled from an open-source hub to find patterns, trends, etc. for all the people in this database. They could potentially ask, “I want to know everything about Bobby Cornwell. I want to know where he lives, where he used to live, where he works, what kind of stuff he buys. Has he ever been to these types of websites? Who is his family? Are they religious? Who are his kids? What school do they go to? What games do they play? How many times have they gone to the doctor? Does he pay his bills? What’s his credit score? How many credit cards does he have?” And, in an instant, they would have a detailed report of everything I mentioned.

From there, they could cross-reference other names and friends, and even look at my LinkedIn profile to see if anybody in any data breach matched my breach and whether we had been in the same place (like a conference) together. They could then take that information and conduct all manner of nefarious activities. Could they call the electric company and turn off my power? Probably so. Could they turn off my water? Probably so. Could they call my credit card companies and change billing addresses? Probably so. Could they contact me directly, tell me that they know all the bad stuff about me (there isn’t any, by the way 😊), and use that to extort money? They 100% could.

Now, I’m just a person, but what if they did the same thing to a big political leader? What if they knew of a guy designing the next-generation nuclear warhead? What if they targeted him and his family to the point where he gave up secret information? Or what if they knew so much about him that, thanks to generative AI, he thought he was talking to his boss or superior officer?

I know this seems surreal, like some Hollywood movie concept, but this, while again not overly easy, is something that is possible today. Can they do this with OpenAI’s platform, or platforms by Google or other mainstream AI providers? No. These platforms currently have built-in ethical protections. But give an extremely skilled hacker an open-source software platform, and that platform can be modified.

To circle back to traditional methods: when data breaches started happening, they typically involved someone’s credit card info and/or social security number. Credit card data was sold on the dark web in bulk. You could go to some sites, and they would have credit cards listed in different categories at different prices. There were “verified” cards, meaning the sellers verified the card worked by charging a few pennies to see if it would authorize. There were “verified high limit” cards, meaning they were able to verify that the card worked and had a high limit available to charge. And there were the “unverified” cards, which obviously were cheaper to purchase, but you would also get more card numbers for your money. Those threat actors would then sell those credit card numbers to other people looking for a quick buck purchasing things like gift cards. Gift cards were super easy to buy and use because there were websites where you could resell them for less than face value. Not only was this an easy way to launder money, but it’s an activity that’s almost impossible to track. Threat actors knew banks would simply pay the cardholder back for the lost money, close the account, and write off the loss.

Today, this still happens, but now there are more breaches involving different kinds of data. For example, medical records are being stolen. These contain your policy information, which can be put into a database and searched. But to be useful, a threat actor needs to either know specifically what they’re searching for or figure out how to build queries for that data. That’s not entirely different from what AI would do, but without AI it’s difficult to cross-reference data. An attacker would need to run different queries, import that data into a single database, then rerun specific queries. This is a time-consuming process, and like everyone else, they’re always looking for ways to improve efficiency.

AI eases the ability to cross-reference data, identify patterns, and track individuals and anybody associated with them.

RANDY: You mentioned the emergence of companies using aggregated breach databases. How does this change the landscape for both hackers and those looking to protect personal information?

BOBBY: Historically, if I signed up for a “dark web search” of my data, the company providing that service would charge a subscription fee, and anytime its software platform identified anomalous behavior, I would receive an alert. While effective, this takes time and lots of resources to accomplish. Having the data in a single place improves speed and efficiency, allowing today’s dark web scanning platforms to instantly tell you how many places your information has been leaked.

Imagine I’m speaking with you in person, and I get a real-time alert from a dark web scanner that my password for my company showed up in a breach of data that was just poured into a database like the one we are discussing. I could immediately stop what I’m doing, change my password, and ensure I exit any programs or connections that could allow lateral access to my corporate infrastructure. That would be huge, as the more advance notice we have, the faster we can move to secure our infrastructure from attack.

On the flip side, imagine if I were a threat actor, and I get paid based on the quality of information I provide. Now these dark web scanning platforms are messing me up because they are tapping into the same resources I use and notifying people quickly that their accounts have been hacked. I would essentially be left with stale data and would not get paid. Then again, if hackers have access to a database like the one we are discussing here, either via subscription from a major hacker or through state sponsorship, imagine how fast they could target people and exploit them. Imagine how sophisticated and accurate a phishing attack could be if the attackers knew everything about the employees of the company they are targeting. I should note that it was just revealed that North Korea is using AI in advanced cyber-attacks.

RANDY: After checking your details on Malwarebytes, you discovered a significant amount of your personal information had been exposed. Could you share how this experience has influenced your perspective on personal cybersecurity?

BOBBY: I feel businesses are not taking their security posture as seriously as they should. For example, I recently received a letter from a mortgage broker saying they “detected” a breach between October and November 2023. They claim they do not have evidence of my information being used, but the data that was compromised included my name, address, phone number, email address, social security number, and date of birth. In short, it was everything a threat actor needed to wage an attack on me. My questions to the mortgage lender were: why wasn’t that data encrypted, and what systems did they have in place to protect it?

As someone who works in the cybersecurity arena, I constantly hear about security equipment, such as firewalls, disk encryption software, identity and access management (IAM) software, and more, not being included in a company’s P&L and thus sometimes being overlooked. I’ve also seen IT-staff-to-employee ratios that are way off, such as companies that have one IT/security person for 100+ employees. This makes it nearly impossible for that individual to properly watch everybody and understand everything that is going on. I’ve witnessed phishing attacks breaching businesses you would never have expected.

Cybersecurity should be top of mind for every business regardless of size. Businesses need to be accountable rather than spending cycles finding legal loopholes when security events occur. If that happened, we would see more investment in network security technology.

There are so many great managed service providers (MSPs), managed detection and response (MDR) providers, and cybersecurity vendors out there with solutions designed to combat these challenges, but they seem to be somewhat underutilized.

Then there are vulnerabilities, which are natural and not unexpected. When I see a vulnerability, I don’t look at it as a weakness of a company; I look at it as a company identifying a problem and fixing it. And while people like to point their finger at the company that had the vulnerability in its software or hardware, they don’t talk about the people within an organization who were notified of the vulnerability and failed to update their product.

RANDY: You’ve mentioned that the media is exaggerating the recent breach. What are some common misconceptions you’ve noticed in media reporting about AI-driven cyberattacks?

BOBBY: In this most recent breach news, for example, reporters have been prone to hyperbole, such as “mother of all breaches.” What does that even mean? When I read the first news story on this subject, my first thought was, “Oh crap, a company just got taken to the cleaners with tons of stored data.” However, once I started reading between the lines, it became clear that the bulk of the data was aggregated from previous breaches.

RANDY: What specific steps would you recommend individuals take to protect themselves from these new forms of AI-enhanced cyber threats?

BOBBY: AI-enhanced cyber threats are going to be dangerous. Technology to detect this kind of advancement is only now being created, in step with threat actors’ adoption of it, because it hasn’t been on anyone’s radar until recently. What companies have been focused on is using AI to become more efficient at threat hunting by combining security information and event management (SIEM) tools with AI. However, threat actors of all levels are starting to utilize AI platforms to create advanced phishing attacks. When it comes to phishing, most network security platforms look for language and grammar errors; for example, emails written in English that have odd statements, punctuation, and/or words that make no sense because the writers speak English as a second language. AI can now make any phishing email look like it was written by an English major.

To better protect against AI-enhanced phishing emails, look at the email address the message is coming from, mouse over (but do not click) hyperlinks to ensure they point to a legitimate site you are certain is real, and do not open attachments (even if you trust the sender) without having some advanced form of endpoint security and advanced email security in your network that scans attachments. If you’re still unsure of the URL, copy everything after the @ in the email address and paste it into a site like VirusTotal (https://www.virustotal.com/gui/home/url), or ask your IT department. In a world of AI enhancements, you unfortunately cannot trust anyone.
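As a rough illustration of that domain check, here is a minimal Python sketch that takes everything after the @ in a sender’s address and looks it up via VirusTotal’s v3 domain-report API. The endpoint and response fields follow VirusTotal’s published v3 API, but treat the details as assumptions to verify against the current documentation; the VT_API_KEY environment variable is a hypothetical name for wherever you keep your own (free-tier) API key.

```python
# Minimal sketch: check the domain portion of a sender's address against
# VirusTotal's v3 domain-report API. Assumes a VirusTotal API key is
# exported as the (hypothetical) VT_API_KEY environment variable.
import os
import sys

import requests

VT_DOMAIN_URL = "https://www.virustotal.com/api/v3/domains/{domain}"


def check_sender_domain(email_address: str) -> None:
    # Everything after the @ is the domain we want to look up.
    domain = email_address.rsplit("@", 1)[-1].strip().lower()
    response = requests.get(
        VT_DOMAIN_URL.format(domain=domain),
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    response.raise_for_status()
    # last_analysis_stats summarizes how many engines flagged the domain.
    stats = response.json()["data"]["attributes"]["last_analysis_stats"]
    print(
        f"{domain}: {stats.get('malicious', 0)} engines flag it as malicious, "
        f"{stats.get('suspicious', 0)} as suspicious."
    )


if __name__ == "__main__":
    # Usage: python vt_check.py someone@example.com
    check_sender_domain(sys.argv[1])
```

A nonzero “malicious” count is a strong signal to delete the email; a clean result is weaker evidence, since newly registered phishing domains may not have been scanned yet.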

Don’t be scared to sign up with a dark web scanning company. Malwarebytes.com offers a free email scanner that will cross-check your email address against known breaches (https://www.malwarebytes.com/hibp). Some insurance and credit card companies offer this service for free. If something comes up, stop what you’re doing and change that password immediately. The next step is to make sure you’re not using the same password for multiple logins. If hackers have one password from you that they know works, they’ll attempt to use it elsewhere to see if they can access anything else attached to your name and email. If you own an iPhone, go to Settings, then Passwords, and Apple will actually let you know if your saved passwords have been compromised in any breaches. If you are a business, invest in a firewall (not a “router with security,” but a real firewall) and ensure the hardware’s firmware is up to date and correctly configured. I could list more tips, but these are a few of the big ones.
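For readers who want to automate that kind of breach check, here is a hedged Python sketch against the Have I Been Pwned API, the same breach dataset the Malwarebytes scanner above draws on. The breachedaccount endpoint and hibp-api-key header follow HIBP’s published v3 API, but verify them against the current documentation; this particular endpoint requires a paid API key, assumed here to live in a hypothetical HIBP_API_KEY environment variable.

```python
# Minimal sketch: check an email address against the Have I Been Pwned
# v3 breach API. Assumes an HIBP API key is exported as the
# (hypothetical) HIBP_API_KEY environment variable.
import os
import sys

import requests

HIBP_URL = "https://haveibeenpwned.com/api/v3/breachedaccount/{account}"


def check_breaches(email_address: str) -> None:
    response = requests.get(
        HIBP_URL.format(account=email_address),
        headers={
            "hibp-api-key": os.environ["HIBP_API_KEY"],
            "user-agent": "breach-check-sketch",  # HIBP requires a user agent
        },
        timeout=30,
    )
    # HIBP returns 404 when the address appears in no known breach.
    if response.status_code == 404:
        print(f"{email_address}: no known breaches. Rotate passwords anyway.")
        return
    response.raise_for_status()
    names = [breach["Name"] for breach in response.json()]
    print(f"{email_address} appears in {len(names)} breach(es): {', '.join(names)}")


if __name__ == "__main__":
    # Usage: python hibp_check.py someone@example.com
    check_breaches(sys.argv[1])
```

If the address turns up in a breach, the advice above applies: change that password immediately and stop reusing it across logins.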

RANDY: How do you see the evolution of cyber threats in the context of rapidly advancing AI technology? What new types of threats should we be prepared for?

BOBBY: As I mentioned above, expect to see AI used for highly sophisticated phishing attacks and for attacks using compromised data that simulate the habits of an impersonated person or target specific individuals or businesses. We expect to see new AI-driven methods for going after high-value targets who can afford to pay a ransom or extortion demand, as well as evolved attacks on vulnerabilities both new and previously disclosed.

RANDY: In your view, what should be the role of corporations in protecting against AI-driven breaches, especially in safeguarding sensitive customer data?

BOBBY: Corporations need to realize that investment in security platforms is key. Layered network security is essential. Ensure you have not one firewall but perhaps two different models that are able to back each other up. It’s not about, “That firewall claims it caught more than the other firewall”; it’s about, “I had two firewalls, one didn’t catch the threat, but the second one did, and we saved ourselves from having to pay hefty fines and losing customer confidence in our business.” Ensure your endpoints are secure and that you have the right people with the right IT-to-employee ratio watching your network. MDR providers are a great way to accomplish this. They are trained and have the right equipment to watch your back 24/7. A simple monthly fee buys you peace of mind that is far more valuable than anything else.

RANDY: What kind of governmental or regulatory measures do you believe are necessary to combat the rise of AI-powered cyberattacks?

BOBBY: It’s already in the works. CISA, NIST, and several others are already putting together professional and industry standards, including guidelines for schools, government, and businesses. It’s going to take this and more from the lawmakers to drive the point home.

RANDY: Looking ahead, how do you foresee the intersection of AI technology and cybersecurity evolving, and what should individuals and organizations do to stay ahead of potential threats?

BOBBY: I think AI will be used by cyber security companies to build better products and make them more efficient and effective. As for when actual security products will be able to detect when AI is being used maliciously, I think that is still to be determined. 

Cybersecurity companies like ours are starting to make plans. In the future, companies that don’t innovate and don’t have proper honeypots and telemetry data to gather samples will have a hard time keeping up.

In terms of individuals and organizations: they need to educate themselves on the potential threats, know what to look for, and know what to do if they detect something suspicious. There should be a plan of action for people who are not sure whether an email is legit. There should also be more resources for individuals at home to help them understand how threats work and how they can affect them or their families. In closing, education is the key ingredient to being ready for the threats of tomorrow.

Randy Ferguson

Randy boasts 30 years in the tech industry, having penned articles for multiple esteemed online tech publications. Alongside a prolific writing career, Randy has also provided valuable consultancy services, leveraging a deep knowledge of technological trends and insights.
