How AI is supercharging social engineering attacks
Social media and advances in artificial intelligence have both played a large role in the growing spread and sophistication of cybersecurity attacks. Despite all the warnings and training to be more diligent, employees remain one of the greatest threats to cybersecurity.
Among the most serious are deception-based social engineering attacks, such as deepfakes, which exploit trust, timing, and human behavior rather than software flaws. Deepfakes and generative AI do not so much create entirely new threats as significantly amplify existing ones. The biggest shift is how easily attackers can now scale and personalize deception.
“We’re already seeing this fuel familiar fraud types such as business email compromise (BEC), account takeover (ATO), and synthetic identity fraud, but with much higher success rates,” explains Jason Bartolacci, director of the ProSight Fraud Alert Network. “Generative AI allows threat actors to rapidly produce convincing emails, messages, and narratives that mirror real employees, customers, and executives, removing the common red flags that previously caused fraud targets to pause or question legitimacy.”
Success builds on multi-layered attacks
Attackers use deepfake video to impersonate executives on live calls, cloned audio to mimic voices on phone calls, and fabricated video personas in recruitment and business-partnering scams.
Deepfakes are usually deployed at the moment the attacker needs to close the deal, says Paul DeMott, chief technology officer at Helium SEO. Email establishes context. Messaging builds familiarity. Then a voice call or video appearance removes the victim’s final hesitation. Especially where the workplace lacks a culture of verification, many staff will simply assume that a higher-up they don’t normally talk to would rather they execute the request than take a moment to verify it.
“I’ve seen voice cloning used very effectively, especially when the attacker already understands ‘normal’ communication,” says Ian Schlakman-Holub, CIO at Alpine Mar, a Miami-based accounting firm. “The goal isn’t to hold a long conversation; it’s to create just enough authenticity to push someone into acting quickly. One of the most effective and frequent attacks to use this method is invoice fraud. This isn’t a new method of attack. But with gen AI able to make an identical copy with the right details changed, followed up by a cloned voicemail to the right person in accounts payable, invoice fraud attacks are more successful than ever.”
What is consistent across all of these attacks is the sense of urgency pressed on the victim, Schlakman-Holub explains. The attacker wants the victim to act before they stop to verify or ask someone else. The moment an employee verifies the request, the attack is usually foiled.
Targeting the same old user vulnerabilities, but more effectively
These attacks too often succeed because they target people who are busy, under pressure, and trying to do the right thing, just too quickly. That has long been the case, but the fraud techniques are now vastly better, making employees much more vulnerable, explains Schlakman-Holub. “The attacker isn’t hacking software, the network, or an app. They’re hacking a workplace that lacks a protocol or culture of verification.”
Significantly upping the risk to organizations is the scale and precision of attacks. Generative AI allows threat actors to study an organization’s communication patterns, hierarchy, and language, often as easily as reviewing its website, and then impersonate someone convincingly enough to bypass skepticism.
AI tools can generate realistic identity documents and then pair them with live video or audio deepfakes to defeat controls such as liveness checks or step-up verification, Bartolacci explains. The result is a level of deception that traditional identity controls were never designed to withstand.
Further, these attacks aren’t random anymore; they’re tailored, timed, and emotionally engineered, Schlakman-Holub says. Attackers using gen AI can produce identical invoices, convincing voice clones, or emails that use specific local and industry vernacular.
Realistic communications are often not what they seem
Current deepfakes have crossed the line between something that close examination can expose and something that can be trusted in real-time communication, DeMott explains. Modern tools can imitate a voice from just a few seconds of recorded audio and produce usable video within seconds of a video call starting. The compression and quality loss that video conferencing platforms introduce actually works to the attacker’s advantage, masking deepfake artifacts.
The misconception is that deepfakes need to be flawless, Schlakman-Holub says. In reality, they only need to be convincing for 30 to 60 seconds. If someone high up in your organization whom you rarely talk to video chatted with you for just 30 seconds, would you really dismiss it as a possible deepfake, he asks. At the very least, if they’re making a request, the first thought of most employees would probably be how to please such a high-ranking person, even if they came across as a little odd.
“The most impressive and concerning examples I’ve encountered weren’t about the deepfake alone,” Schlakman-Holub says. “They were multi-stage attacks where the attacker demonstrated deep local knowledge of a town and industry before finally using voice cloning. The attacker first sent several emails posing as a potential client. And, most likely using gen AI, the attacker sent dozens of emails back and forth with sales reps in an attempt to prove they were a legitimate construction business with a local office.”
“The sales reps finally CCed their CEO on these emails,” Schlakman-Holub explains. “Then the attacker left a voicemail in perfect local dialect for the CEO. Finally, the CEO reached out to us, as he was rightly suspicious. We made the simple decision to call the business that the attacker was impersonating. The staff at the construction company confirmed that they had suffered an email breach and that other companies were reporting similar attacks.”
By the time the attacker moved from emails to voicemails, the sales reps involved already believed they were dealing with a legitimate person, Schlakman-Holub says.
“Thankfully the CEO acted on his suspicion and reached out to us. After this attack, the CEO engaged us to run regular staff training about these sorts of attacks, then worked hard to change the culture of the organization. A simple phone call to the publicly listed number of that construction company at the beginning of all of this would have thwarted the attacker immediately.”
Steps for defending against attacks
From a technology standpoint, the basics still matter in defending against deception-based social engineering attacks, Schlakman-Holub says. This includes strong email authentication, monitoring tools, anomaly detection, and strict approval workflows for financial and access-related actions. But technology alone is not enough.
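To make the email-authentication basics concrete, the sketch below uses Python’s standard `email` library to parse a message’s Authentication-Results header and flag any SPF, DKIM, or DMARC mechanism that did not pass. The header contents and domain names here are invented for illustration; in a real deployment this header is added by the receiving mail server, and a non-empty failure list would route the message for review rather than delivery.

```python
import email
from email import policy

# Hypothetical raw message; real systems receive this from the mail pipeline.
RAW = b"""\
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=vendor.example;
 dkim=pass header.d=vendor.example;
 dmarc=pass header.from=vendor.example
From: billing@vendor.example
To: ap@corp.example
Subject: Updated invoice

Please see the revised invoice attached.
"""

def auth_failures(raw_bytes):
    """Return the SPF/DKIM/DMARC mechanisms that did not report 'pass'."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    # Fold all Authentication-Results headers into one searchable string.
    results = " ".join(msg.get_all("Authentication-Results", []))
    return [m for m in ("spf", "dkim", "dmarc") if f"{m}=pass" not in results]

if __name__ == "__main__":
    print(auth_failures(RAW))  # [] for this message; non-empty means quarantine
```

A message with no Authentication-Results header at all fails all three checks, which is the safe default for an invoice-fraud filter.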
“From an employee perspective, training is critical,” Schlakman-Holub says. “Users need to be taught when to slow down, when to be suspicious, and that escalating concerns is a good thing that will be rewarded, not punished. They need to know they won’t be penalized for reporting something that turns out to be harmless.”
For MSPs, MSSPs, and IT departments specifically, this means preparing SOC and support teams for a significant increase in these escalations. That’s not a failure; it’s your employees being more scrupulous, which is a success.
DeMott recommends that organizations require multi-factor authentication for high-risk transactions over already established lines of communication; deploy AI-based deepfake detection tools that analyze audio and video for evidence of manipulation; adopt a protocol requiring multiple approvals through independent channels before any high-value action; and use out-of-band confirmation that attackers cannot intercept or spoof.
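The multiple-approval and out-of-band rules can be sketched as a simple policy check. The $10,000 threshold, channel labels, and role names below are illustrative assumptions, not DeMott’s specifics; the point is that approvals only count when they arrive over a channel independent of the one the request came in on.

```python
HIGH_VALUE_THRESHOLD = 10_000  # assumed policy limit, in dollars

def can_execute(amount, request_channel, approvals):
    """approvals: iterable of (approver, channel) confirmations collected so far."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True
    # Out-of-band rule: a confirmation over the same channel as the request
    # (e.g. a reply to the requesting email) does not count; a call to a
    # known number or an in-person check does.
    independent = {name for name, channel in approvals if channel != request_channel}
    # Multiple-approval rule: two distinct approvers over independent channels.
    return len(independent) >= 2

# A cloned-voice email request backed only by an email "confirmation" is blocked:
print(can_execute(50_000, "email", [("cfo", "email")]))  # False
print(can_execute(50_000, "email", [("cfo", "phone"), ("controller", "in_person")]))  # True
```

Keeping the rule this strict means a deepfaked executive on one channel can never unlock a high-value action by itself, which is exactly the gap these attacks exploit.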
“To increase awareness, train employees to recognize high-risk situations,” DeMott says. These include an executive urgently requesting action, or odd requests that do not follow established procedures.
“Create a culture where employees feel empowered to validate communications through a secondary channel, explicitly investigate suspicious communications, and reward employees who report possible threats.”