An employee in the finance department at a retail company recently got a call from his CFO directing him to wire $700,000 to a business the company was in the process of acquiring. The executive noted that the transaction was extremely time sensitive.
The request seemed a bit out of the ordinary, but the employee, not wanting to ruffle feathers by questioning the CFO, promptly carried out his boss’s order and made the transfer.
The problem is, the voice on the phone was not the CFO’s. It was an extremely realistic deepfake voice impersonation generated using artificial intelligence, and the attack cost the retailer the $700,000. The fake CFO provided wiring instructions that routed the funds to the cybercriminal rather than to the target company.
“The combination of the authenticity of the voice, the sense of urgency, and the CFO being in a position of authority resulted in the employee not asking critical questions or verifying the request,” says Michael McLaughlin, cybersecurity and data privacy practice group co-leader at law firm Buchanan Ingersoll & Rooney, which represents the retail company.
The request for the financial transaction deviated from standard operating procedure, McLaughlin says, but the employee was so convinced it was the CFO on the call that he went ahead with the wire transfer.
The tipoff was when the acquisition target called the retailer a few days later asking when it should expect payment to arrive.
To address the incident and prevent similar attacks from being successful, the organization implemented several measures, including enhanced verification protocols for financial transactions requiring multiple approvals, McLaughlin says.
The measures also include verifying all requests by independently calling a known number for the requester, holding regular training sessions for employees on identifying deepfake content, and collaborating with cybersecurity firms to develop detection tools and response strategies.
Fake-out threats on the rise
The incident is one of a growing number of deepfake attacks against organizations, and CISOs and other cybersecurity leaders need to work with business executives to bolster defenses against such attacks.
Deepfakes don’t just involve celebrities and other public figures anymore. Virtually anyone at any time can have their likeness used for the commission of cybercrimes. According to a 2024 survey conducted by Deloitte, around 15% of executives said cybercriminals targeted their companies using deepfakes at least once over the previous year.
The problem has drawn the attention of the US Congress. In April 2025, a bipartisan group of senators reintroduced legislation to address the issue of unauthorized uses of voices and likenesses for AI-generated deepfakes. The NO FAKES Act would give individuals the right to authorize use of their likeness and voice in a digital representation, in an effort to reduce the use of deepfakes.
The legislation would hold individuals or companies liable if they produce an unauthorized digital replica of an individual in a performance; hold platforms liable for hosting an unauthorized digital replica if the platform has actual knowledge that the replica was not authorized by the individual depicted; and largely preempt state laws addressing digital replicas to create a workable national standard.
In one of the biggest known deepfake attacks, engineering group Arup lost $25 million in a videoconference scam that employed fake voices and images.
Real-world fabrications
Even security vendors have been victimized. Last year, the governance, risk, and compliance (GRC) lead at cybersecurity company Exabeam was hiring for an analyst role, and human resources (HR) qualified a candidate who looked very good on paper, with a few minor concerns, says CISO Kevin Kirkwood.
“There were gaps in how the education was represented in the resume, but beyond that it was immaculate,” Kirkwood says. During the online interview, the candidate “was a bit scripted in her responses and appeared to be trying to answer questions that the HR screener wasn’t really asking, but they were still good answers.”
The interviewee was passed forward to the GRC team, which conducted its own video interview. “Almost immediately, they began to notice some oddities,” Kirkwood says. As the interview progressed, the team noticed additional things that raised concerns.
“The person seemed to be too stationary, not blinking, not moving her body, and the facial expression remained the same,” Kirkwood says. “The mouth did move. The answers that the person was giving were still not directly aligned to the questions that were being asked.”
The GRC manager approached Kirkwood about the interview and what she had experienced with the interviewee. “It rang a bell with me and I pulled up a website that explained deepfake videos, and she immediately said, ‘That’s exactly what that was!’”
Kirkwood’s team shared the finding with the HR team “to create awareness that this was something that we were beginning to see and that it was going to be something that we expected to occur more often,” he says. “Awareness, at the time, was enough. HR was trained on how to spot the anomalies and screening became a more intense process with recruiters looking for specific factors in the video.”
At the time of the incident, video processing and deepfake tools were not as advanced as they are now, Kirkwood says. “Just a few short months ago you would have been okay with using most visual cues to identify when a person was using a deepfake,” he says.
The use of the deepfake in the interview was believed to be an example of the North Korean fake IT worker scam that organizations have increasingly been contending with of late.
Another security firm, KnowBe4, experienced a similar incident in July 2024 when it discovered that a newly hired employee named “Kyle” wasn’t from Atlanta as stated, but from North Korea.
“We were dealing with a synthetic identity along with a deepfake image,” says James McQuiggan, security awareness advocate at the company. “Within moments of receiving his company-issued laptop, Kyle attempted to install malware. Security tools triggered alerts, and the team quickly isolated the device and the account before any damage occurred.”
Had the alerts not triggered, Kyle might have stayed undetected for much longer, McQuiggan says. On closer inspection, Kyle’s job application was fabricated, including an AI-generated headshot, he says. “Further investigation revealed Kyle was part of a North Korean campaign to embed operatives in organizations for espionage and financial gain.”
The incident was an unsettling wake-up call for the cybersecurity and HR teams, McQuiggan says. “The hiring process had been fully remote,” he says. “There were no red flags in background checks. The submitted documents passed standard HR scrutiny. The attacker used convincing synthetic identity techniques — a blend of factual and fabricated details, all created to evade detection.”
Fighting the fakers: Tips for securing your enterprise
To effectively protect against the threat of deepfakes, organizations need to adopt a multi-layered defense strategy that includes teaching employees what to look for in such attacks, Buchanan Ingersoll & Rooney’s McLaughlin says.
“Key defenses include employee training and awareness programs that educate staff about the risks posed by deepfakes and the importance of verifying requests before acting on them,” McLaughlin says. “Additionally, organizations should implement strict verification processes for sensitive actions, such as financial transactions.”
Here are several steps CISOs can take to help combat this rising business threat.
Implement deepfake awareness training. Deepfake awareness education “is the only control for humans,” says Paul Perry, risk advisory practice leader at accounting and advisory firm Warren Averett. It should give employees an understanding of why there are so many deepfake attacks and what the latest tactics are, “to help them understand when something is out of the ordinary and unusual,” he says.
Training needs to teach users to question everything they receive instead of taking the quick way out and just responding to a request without pause, Perry says.
Employee awareness and training programs need to be ongoing rather than one-offs, and the training should teach employees to spot red flags such as stilted audio, mismatched lip movements, and urgent requests, says Mithilesh Ramaswamy, senior engineer at Microsoft.
“To effectively guard against deepfakes, it’s essential to integrate training and skill-building into daily workflows, rather than relying on infrequent, one-off sessions,” Ramaswamy says. “This ongoing approach helps employees stay prepared to identify and neutralize manipulated video and audio content as soon as it appears.”
Conduct drills and establish clear internal policies. Threat actors often rely on human lapses in judgment, Ramaswamy says. He also recommends simulation exercises as part of the learning process. “Conduct drills or tabletop exercises, where employees practice responding to suspicious calls or videos claiming to be from executives,” he says.
In addition, clear internal policies and processes play a critical role in stopping deepfakes.
Scrutinize business workflow policies for potential weaknesses. At KnowBe4, the hiring process has been significantly revamped to mitigate risks, McQuiggan says. “It includes adding greater scrutiny to background checks to ensure a comprehensive evaluation of candidates,” he says. “When a laptop is shipped to a new hire, they are to collect it at a local UPS store and provide matching identification to ensure it’s the correct person.”
To keep the hiring team informed and vigilant, HR and recruiting teams receive updated briefings on deepfake tactics. This aims to equip them with the knowledge necessary to recognize and respond to sophisticated deception techniques, McQuiggan says.
From a process perspective, multiple verification points, much like multi-factor authentication for passwords, can be applied when dealing with video and deepfake voice requests, Perry says. “Once the request comes in, individuals need to do X, Y and Z to further approve or validate the need,” he says.
“Unfortunately, history tells us technology grows, so the threats will grow as technology enhances itself,” Perry says. “Constant validation or verification — or the human interaction — will become key.”
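To make Perry’s multi-step verification idea concrete, here is a minimal, hypothetical sketch in Python of an approval gate for high-value payment requests. The check names, threshold, and WireRequest structure are illustrative assumptions, not a description of any particular organization’s controls.

```python
# Hypothetical sketch: a multi-step approval gate for high-value payment
# requests, analogous to multi-factor authentication. All names, checks,
# and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

REQUIRED_CHECKS = {
    "callback_to_known_number",    # call the requester back on a number on file
    "second_approver_signoff",     # independent sign-off from a second manager
    "follows_standard_procedure",  # request matches the documented payment process
}
HIGH_RISK_THRESHOLD_USD = 10_000   # assumed policy threshold

@dataclass
class WireRequest:
    requester: str
    amount_usd: float
    completed_checks: set = field(default_factory=set)

def can_release(req: WireRequest) -> bool:
    """Release funds only when every required verification step is recorded."""
    if req.amount_usd < HIGH_RISK_THRESHOLD_USD:
        return True
    missing = REQUIRED_CHECKS - req.completed_checks
    if missing:
        print(f"Blocked: missing verification steps {sorted(missing)}")
        return False
    return True

# Example: an urgent $700,000 request with only one check completed is blocked.
request = WireRequest(requester="CFO (by phone)", amount_usd=700_000)
request.completed_checks.add("second_approver_signoff")
print(can_release(request))  # False
```

The point of such a gate is that no single convincing voice or video, however authentic it sounds, can release funds on its own; at least one verification step always happens out of band.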
Revamp incident response plans. Enterprises can also develop and regularly update incident response plans that outline how to respond to suspected deepfake incidents. This can enhance organizational resilience and readiness to handle such threats, McLaughlin says.
Consider investing in deepfake-targeted tools and skills. AI-based detection software, which can analyze video and audio content for inconsistencies and signs of manipulation and flag suspicious content before it reaches employees or other stakeholders, is a wise investment, McLaughlin says.
“Employing digital forensics experts can further enhance media authenticity verification through techniques such as analyzing metadata and pixel-level anomalies,” McLaughlin says. “Additionally, utilizing blockchain technology for content verification can help establish authenticity by embedding digital watermarks or hashes in legitimate media.”
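As a simplified illustration of the hash-based authenticity checks McLaughlin describes, the Python sketch below registers a SHA-256 fingerprint for a media file when it is published and later verifies that a copy still matches. The in-memory registry is a stand-in; anchoring fingerprints in a blockchain or embedding watermarks, as he suggests, is outside the scope of this sketch.

```python
# Simplified sketch: register a cryptographic fingerprint of legitimate media,
# then verify later copies against it. The in-memory dict stands in for a
# tamper-evident store (e.g., a blockchain ledger), which is not shown here.
import hashlib
from pathlib import Path

def media_fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a media file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

registry: dict = {}  # hypothetical record of fingerprints for official media

def register(path: Path) -> None:
    """Record the fingerprint when the legitimate media is first released."""
    registry[path.name] = media_fingerprint(path)

def is_authentic(path: Path) -> bool:
    """True only if the file's current hash matches the registered fingerprint."""
    expected = registry.get(path.name)
    return expected is not None and expected == media_fingerprint(path)
```

A match proves only that the file is unchanged since registration, not that the original content was genuine, which is why forensic review and watermarking remain complementary controls.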
Some deepfake defense tools on the market might be limited at this point, however. “There are new tools that help detect deepfake videos that claim to be able to identify patterns that repeat in the representation of the deepfake video,” Exabeam’s Kirkwood says.
Such tools are interesting until you consider that the detection has to be layered in alongside whatever communication tool the organization uses to conduct interviews, Kirkwood says. “I would prefer to have those [communications] tools be the source of the detection and layer it in,” he says. “It would be a case of AI detecting AI and alerting.”
Know the law. Enterprises need to be aware of the applicable law regarding the consequences of not addressing deepfakes once they are known, says Reiko Feaver, a partner at CM Law and a privacy and data security attorney whose practice focuses on AI.
“Not just statutory laws, but common law concepts such as negligence, tort, misrepresentation, [and] fraud,” Feaver says. “Companies have to be aware of not only becoming a victim but of their obligations if they are a victim and don’t do anything about it.”
The original article, “Deepfake attacks are inevitable. CISOs can’t prepare soon enough,” was published by CSO Online.