5 Ways Cybercriminals Are Exploiting Deepfakes in 2024
Look at how deepfake technology is being used by cybercriminals to evade KYC procedures, perpetrate financial fraud, and initiate malicious schemes.
Last month we looked at how cybercriminals are using AI in illicit ways. This article extends that discussion to one of AI's most worrying applications: deepfake technology, along with a subcategory called "deep voices" used in voice-based attacks. Deepfake technology fabricates video, audio, and image content, and what began as a novelty has become a weapon for cybercriminals.
Cybercriminals are using deepfakes to bypass know-your-customer (KYC) processes, particularly on financial and cryptocurrency platforms. Some institutions require video identification, where users record themselves and submit personal identification documents. Using deepfake technology, cybercriminals create videos that convincingly impersonate real people and open new accounts under another person's name.
This tactic is not limited to account creation. Cybercriminals also use deepfakes to bypass account recovery processes in cases where a financial account has been flagged for suspicious activity and locked. They produce realistic videos that satisfy identity verification and regain access to the compromised account. This technique is still fairly rare but is being employed in targeted attacks where financial gain is at stake.
A further alarming use of deepfake technology is voice spoofing, also known as "deep voices." Tools that let cybercriminals clone or synthesize voices are increasingly common on underground forums. These tools can bypass voice authentication systems—a security measure many financial institutions use.
Some banks, for example, use voice recognition for account access. A customer might say a passcode or phrase, and the system will grant access based on voice tone and patterns. By imitating the voice of a real account holder, attackers can fool these systems into granting access to bank accounts.
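To see why a cloned voice can fool such a system, consider how speaker verification is typically scored: the system compares a "voiceprint" embedding of the incoming audio against the enrolled one and grants access if they are similar enough. The sketch below is a deliberately simplified illustration with toy four-dimensional vectors (real systems derive high-dimensional embeddings from audio with a speaker-embedding model); the names and threshold are assumptions, not any bank's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two fixed-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, sample, threshold=0.85):
    """Grant access if the sample's voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy voiceprints (illustrative values only)
enrolled = [0.12, 0.80, 0.35, 0.41]
genuine  = [0.11, 0.79, 0.36, 0.40]   # same speaker, natural variation
cloned   = [0.10, 0.78, 0.37, 0.39]   # a good deepfake lands inside the threshold too

print(verify_speaker(enrolled, genuine))  # True
print(verify_speaker(enrolled, cloned))   # True — similarity alone can't flag a clone
```

The point of the toy example: similarity scoring measures how close two voices sound, so a sufficiently accurate clone passes exactly like the genuine speaker unless the system adds separate spoof or liveness detection.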
The dangers don't stop there. Attackers may also imitate trusted people—a colleague or relative—to manipulate their targets. For example, a cybercriminal could impersonate a CEO and get an employee to authorize a fraudulent transfer of funds. Voice spoofing has enormous potential in corporate espionage and fraud, and as the technology improves, these types of attacks are likely to increase.
Darker uses of deepfake technology include creating explicit images or videos for blackmail. This disturbing tactic is unfortunately becoming more common. Cybercriminals can superimpose a victim's face onto explicit content, making the person appear to be in compromising situations.
Victims of these attacks are often threatened with the release of the fabricated content unless they comply with the attackers’ demands, which often involve financial extortion or other coercive measures. Although victims can claim the content is fake, the quality of modern deepfake technology can make it extremely difficult to distinguish real from fake. This makes the threat of exposure psychologically damaging, even when the material is not genuine.
Cybercriminals are also using deepfake images and videos to forge realistic-looking documents. By capturing images of individuals from social media or other sources, criminals can create fake IDs, passports, and other official documents, which they then use to commit fraud or sell on illicit marketplaces.
The quality of these forgeries has improved dramatically, making them difficult to detect without biometric verification. While many digital services only require a scanned copy of an ID for verification purposes, these fraudulent documents are more than enough to bypass such systems. This opens the door for criminals to hijack accounts, commit financial fraud, and gain access to restricted services.
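One reason scan-only verification is so weak is that it rarely validates the document's internal consistency. Real machine-readable travel documents carry check digits computed over key fields per ICAO Doc 9303, which a stronger pipeline can verify even from a scan. Below is a minimal sketch of that standard check-digit calculation; it illustrates one validation step defenders can add, not the full verification any particular service performs.

```python
def mrz_check_digit(field: str) -> int:
    """ICAO Doc 9303 check digit: weights 7,3,1 cycling over the field;
    '<' counts as 0, digits as themselves, letters as A=10 ... Z=35."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch == "<":
            val = 0
        elif ch.isdigit():
            val = int(ch)
        else:
            val = ord(ch) - ord("A") + 10
        total += val * weights[i % 3]
    return total % 10

# ICAO Doc 9303 specimen passport number 'L898902C3' has check digit 6
print(mrz_check_digit("L898902C3"))  # 6
```

A forged scan with plausible-looking but internally inconsistent fields fails this arithmetic instantly, which is why document-data validation and biometric checks catch forgeries that a visual scan review does not.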
Malicious advertising, also known as “malvertising,” has been a longstanding method for spreading malware, but deepfakes are giving this tactic a new level of sophistication. Cybercriminals have started to create deepfake videos featuring well-known public figures endorsing products or services. These advertisements appear legitimate and are often distributed through trusted ad networks, such as Google Ads.
In many cases, the individual in the deepfake video appears to be promoting a product, but the link directs viewers to a phishing site or malware download. Alternatively, viewers might be encouraged to enter sensitive information, such as credit card details, under the guise of a legitimate transaction. These deepfake ads are incredibly deceptive, as they use trusted figures to lend credibility to fraudulent schemes.
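One telltale sign of such ads is a mismatch between the URL the viewer sees and the host the click actually resolves to. The sketch below shows a simple host-comparison check using Python's standard `urllib.parse`; the function names and example domains are hypothetical, and a production check would also normalize registrable domains against the Public Suffix List and follow redirects.

```python
from urllib.parse import urlparse

def host_of(url: str) -> str:
    """Lowercased hostname of a URL (a real check would also reduce this
    to the registrable domain via the Public Suffix List)."""
    return (urlparse(url).hostname or "").lower()

def looks_suspicious(display_url: str, destination_url: str) -> bool:
    """Flag an ad whose visible URL and actual click-through destination
    point at different hosts — a common malvertising pattern."""
    return host_of(display_url) != host_of(destination_url)

# Same host: not flagged
print(looks_suspicious("https://example-bank.com", "https://example-bank.com/login"))   # False
# Lookalike host ('1' for 'l'): flagged
print(looks_suspicious("https://example-bank.com", "https://examp1e-bank.xyz/login"))   # True
```

Checks like this are cheap for ad networks and mail gateways to run, but they only catch the crude cases; the deepfake video itself is what lends the fraudulent landing page its credibility.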
With major advances in LLMs and heavy investment in AI, particularly in cybersecurity, it's evident that we are only beginning to uncover this technology's potential.
Blink is an ROI force multiplier for security teams and business leaders who want to quickly and easily secure a wide range of use cases, including SOC and incident response, vulnerability management, cloud security, identity and access management, and governance, risk, and compliance.
With thousands of automations in the Blink library and the ability to customize workflows to fit your specific use case, Blink Ops can significantly improve your security operations. Get started with Blink Ops.
Blink is secure, decentralized, and cloud-native. Get modern cloud and security operations today.