"Is the AI Threat Inversely Proportional to Human Error?"
Today, it’s easy to find someone who thinks AI is the world’s largest cybersecurity threat. Luckily, I’m happy to report it’s not a threat – at least, not yet. Could it become a threat? Yes, but it’s important to understand a couple of things first. One, there are many more pressing cybersecurity threats to focus on. Two, conditions of the threat landscape will determine whether and when AI might be commandeered for nefarious purposes. So, let’s take a look at the landscape in order to start tracking AI’s potential hacking trajectory.
AI does have hacking use cases, chief among them social engineering and pretexting: tricking users into divulging sensitive information, often by masquerading as a trusted person or institution. Generative AI’s large language models have become quite sophisticated and can mimic people’s speech patterns in a number of different languages, effectively enabling threat actors to scale their social engineering attacks around the globe. Deepfakes have also gotten very good, which would be especially compelling for “vishing,” or voice phishing, attacks. Vishing is simple but very effective; some of the biggest hacks in recent years trace back to vishing tactics. You might be wondering why hackers don’t avail themselves of these incredible technologies. Well, they don’t need them.
Many people labor under the assumption that hackers use the most cutting-edge technologies to crack digital Fort Knoxes, but the truth is malicious actors look for the biggest paydays with the least resistance. Simple attacks – phishing, basic web application attacks, etc. – already work very well. Why would hackers choose a more complex route? They wouldn’t, unless they had to.
The human element has factored into the majority of cybersecurity breaches for years, and last year was no different. According to the Verizon Business 2024 Data Breach Investigations Report (DBIR), more than two-thirds of breaches (68%) involved a non-malicious human element, such as a user falling victim to a social engineering attack or making some other type of error. Hackers prey on human fallibility.
Encouragingly, this year’s DBIR revealed that employees are more likely to identify and report breach attempts, phishing emails, and even their own mistakes, like clicking on a malicious link. For instance, 20% of users identified and reported phishing in simulation engagements, and 11% of those who clicked the email also reported it. This is a good sign because timely reporting is an effective way to stem the spread of a security incident, preventing it from becoming a full-scale breach. It also suggests that cybersecurity awareness is on the rise and that making a cybersecurity-related mistake no longer carries the stigma it once did. These emerging trends are encouraging and may, over time, serve to minimize the impact of the human element. Might that close off some of the easier paydays for hackers? Possibly. In that scenario, would they turn to AI? Probably.
How would the security landscape change if most hackers were utilizing AI? It would become virtually impossible to distinguish between the real and the artificial. Emails, text messages, voice calls and even video calls could not be trusted. In that world, a zero-trust approach — a model that assumes threats can come from anywhere and requires strict, continual authentication of users — would be a necessity. While implementing zero trust is a heavy lift for some organizations, it’s an effective strategy for staying ahead of the curve on security.
AI may become a cybersecurity threat, just as quantum computing, which could transform the requirements of encryption protocols overnight, may become a cyber threat one day. This remains speculation for the time being. While it’s important to stay abreast of developments within cybersecurity, we’d do well to keep our eyes trained on the more immediate threats. Still, it’s important to remember that as we plug security holes like the human element, and as more conventional routes are blocked, malicious actors will look for new tactics.