With the rise of AI-generated deepfakes, which are becoming more convincing and widespread in video conferencing, companies are finding it increasingly difficult to ensure they know who they’re actually talking to during online meetings.
We’re seeing the damage cybercriminals can cause when they steal credentials using social engineering, only to then bypass outdated MFA systems to access critical data or deploy ransomware. These older MFA systems just aren’t cutting it anymore, leaving companies vulnerable in ways that are making the news almost daily.
Deepfakes are videos of a real person created using artificial intelligence (AI). Deepfake technology produces a very convincing copy of a real person, making it appear as though they are saying or doing something they never actually did. Deepfakes were originally created for entertainment and artistic purposes, but unfortunately they are now being used for fraud and other malicious activities.
Interactive deepfakes take this a step further. They not only mimic a person’s appearance and voice but also enable real-time interaction during live events, such as video conference calls. They are so advanced that they can respond to questions and participate in conversations, and they are extremely difficult to detect because the technology to recognize a fake participant barely exists.
Deepfake is an AI technology that uses machine learning algorithms to create realistic, but entirely fake, images, audio, and video. To create this fake media, the AI must analyze huge amounts of data, which can come in the form of video, voice recordings, and photos. By analyzing the data, it learns to mimic mannerisms, appearance, voice, and movements. It then generates images, audio, or video that is very often difficult to distinguish from the real thing.
One way these realistic forgeries are created is with neural networks trained as a Generative Adversarial Network (GAN). This approach pits two AI models against each other: one generates the fake content, and the other evaluates its authenticity. The constant back-and-forth refines the output, making the deepfake increasingly difficult to detect. By feeding the AI large amounts of video and audio data, it learns to replicate a person’s expressions, voice, and gestures until it can convincingly imitate the original.
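The adversarial loop described above can be sketched in a few lines of code. This is a toy, hypothetical example, not deepfake software: a linear generator learns to mimic a simple one-dimensional data distribution while a logistic discriminator tries to tell real samples from fake ones, and each model's gradient step works against the other.

```python
import numpy as np

# Toy GAN: generator g(z) = a*z + b tries to mimic samples from N(4, 1),
# while a logistic discriminator d(x) = sigmoid(w*x + c) learns to tell
# real samples from generated ones. Gradients are computed by hand.
rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)        # samples of real data
    z = rng.normal(0.0, 1.0, batch)           # generator noise input
    fake = a * z + b                          # generated ("fake") samples

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((dr - 1.0) * real) + np.mean(df * fake)
    grad_c = np.mean(dr - 1.0) + np.mean(df)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(w * fake + c)
    a -= lr * np.mean((df - 1.0) * w * z)
    b -= lr * np.mean((df - 1.0) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))  # generated mean should drift toward 4
```

Real deepfake GANs follow the same adversarial pattern, only with deep convolutional networks and millions of parameters instead of four scalars; the refinement loop is what makes the output progressively harder to distinguish from the real data.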
Cybercriminals are getting smarter and upping their game by creating deepfakes that imitate high-level executives. They are using these deepfakes in real-time video conferences. These fake videos are shockingly realistic, making it incredibly difficult for anyone to recognize if they’re dealing with the real person or an AI-generated deepfake. As these AI-generated deepfakes become more accessible—likely by mid-2025—companies will struggle to verify whether the person on the other end of the call is truly who they claim to be. This can result in serious consequences, such as the disclosure of sensitive information, unauthorized financial transactions, or even worse.
Perhaps you heard about the case that was reported by CNN. An unfortunate finance employee at a global company fell victim to fraud. The scammers used deepfake technology during a video call and posed as the company’s CFO. During the call they managed to convince the employee to transfer $25 million. Unfortunately, the deception wasn’t uncovered until the next day, when the employee met the real person in the office. By then, it was too late. The money was gone, and there was no way to recover it.
Biometric identification is a way of confirming someone’s identity by using physical or behavioral traits like fingerprints, facial features, voice, or even the iris of the eye. But with deepfakes making face and voice recognition unreliable, we’re left with only fingerprints and, potentially, iris scans as trustworthy options. Compared to traditional methods like passwords, PINs, or legacy MFA, biometrics offer a much more secure and convenient alternative. These traditional methods can be lost, stolen, or hacked, while biometric identification helps protect against identity theft, fraud, or impersonation—risks made worse by deepfakes.
For biometric ID to work effectively, it has to be convenient, something that’s always with you, and compatible with any device. It’s also critical that the biometric data stays on the device itself rather than on a network where it could be stolen or misused. Devices with a secure element can lock away this information in a way that makes it nearly impossible to access—even if tampered with. Unlike network storage, which can be an attractive target for cybercriminals trying to steal millions of fingerprints in one go, a wearable device is hard to lose and nearly impossible to extract information from. Token Ring is an example of next-generation biometric security—it's always with you, and it will keep your data safe.
Wearable biometric security can protect against deepfakes in video conferencing platforms like Zoom, Teams, and others. Each participant is required to authenticate with a unique biometric, such as a fingerprint, before they can access the meeting. An encrypted token is then generated and sent to the conferencing platform, confirming that the person on the other end is who they claim to be and not an AI-generated imposter. Because the system is tied to the user’s fingerprint, only the real person behind the video feed can produce a valid token. Every participant is checked for a valid token before being admitted to the digital meeting, so everyone can be confident that the conversation is both genuine and secure.
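A minimal sketch of what such a token check could look like, assuming a simple challenge-response design (the function names and the HMAC scheme here are illustrative assumptions, not Token's actual protocol): the meeting platform issues a fresh random challenge, the wearable signs it with a key that never leaves the device's secure element, and the platform verifies the signature before admitting the participant.

```python
import hashlib
import hmac
import secrets

# Hypothetical challenge-response check. In a real device the key would
# live inside the ring's secure element and be unlocked only by a
# fingerprint match; here it is simulated in memory for illustration.
DEVICE_KEY = secrets.token_bytes(32)          # provisioned at enrollment

def sign_challenge(device_key: bytes, challenge: bytes) -> bytes:
    """Ring side: sign the platform's challenge after a fingerprint match."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify_participant(known_key: bytes, challenge: bytes, token: bytes) -> bool:
    """Platform side: admit the participant only if the token is valid."""
    expected = hmac.new(known_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)  # constant-time comparison

challenge = secrets.token_bytes(16)           # fresh challenge per meeting join
token = sign_challenge(DEVICE_KEY, challenge)
print(verify_participant(DEVICE_KEY, challenge, token))                # True
print(verify_participant(secrets.token_bytes(32), challenge, token))  # False
```

The key point is that a deepfake can imitate a face and a voice, but it cannot produce a valid signature without the enrolled device and the fingerprint that unlocks it, so an imposter's join request fails the check.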
Unlike passwords and IDs—which are easily found on the dark web and can be used by anyone, including hackers armed with deepfakes—biometric security offers a much higher level of protection.
For thousands of years, most of us have relied on recognizing faces and voices to identify the people we’re speaking with. Until now, this has worked well, even with video conferencing. Unfortunately, times have changed, and the person you see and hear on your screen might not be who you think they are.
As interactive deepfakes become easier to create, it’s becoming clear that by the end of 2025, biometric verification tied to a verified ID should be mandatory for everyone joining a video conference. Soon, nothing else will be trustworthy. If someone tells you, “I forgot my ring at home,” it’s a red flag—hang up. It’s probably not them. An interactive deepfake cannot replicate a fingerprint or generate the required secure token. If verification is enforced, the attackers will likely disappear and move on to an easier target.
As with many advances, this shift may create a divide: those who adopt wearable biometrics and those who stick with outdated logins and MFA. Unfortunately, the latter group will be more vulnerable, and cybercriminals will target them. They’re already doing so.
Want to learn more about Token's biometric smart ring that will help protect your organization from interactive deepfakes? Request a demo