Live Deepfakes: Calls From the Near-Human


Who do you think they are? No, really: can you be certain that the person talking to you from the other side of your screen is who they say they are? Deepfake advancements are making it increasingly difficult to trust someone's word, much less your own eyes. The newest development in the deepfake industry is the real-time deepfake, in which a live video call is fronted by a false, digitized version of a real person.

In case you need a reminder, deepfakes are images or videos of real people that have been altered to make them appear to do or say things they never did. AI models are trained on content found online to render these representations. Public figures and celebrities have been the most popular targets, since their images and voices exist in abundance on the web. But the average person is now also at risk of being deepfaked, particularly if they are frequently active online.

One blockchain professional recently fell victim to a live deepfake scam: an impersonator posed as him, scheduling and leading 20-minute meetings with potential investors. He only discovered he was being impersonated after receiving messages from clients who doubted his identity.

This example might sound like a silly, failed scam, but the impersonation could have had sinister results: clients were asked to invest their Bitcoin in the impostor's false business ventures. Further investigation revealed fake social media profiles and video recordings inviting clients to the scam meetings. This particular deepfake artist failed, but these virtual impersonations are becoming more prevalent, and they are not always easy to identify on the spot.

Listen Up: Real-Time Audio Deepfakes

Alongside visual deepfakes, you might also encounter an audio deepfake. Voicemails left by deepfakes are already rampant; these recordings mimic a real person's voice and typically ask the listener to take an action, such as sending the caller personal information. But a real-time phone call can also take place between you and a deepfake scammer. In a common scheme, an employee receives a call from their "boss." When the employee answers, they hear a voice they trust instructing them to wire a large sum of money to the boss's personal bank account. The transfer goes through, leaving the criminal in possession of company funds.

Machine learning has enabled these fabricated calls to mimic a real person's voice. Just like visual deepfakes, audio deepfakes require training data from online sources, such as videos or podcasts recorded and published by the person being impersonated. Modern text-to-speech systems can now generate a convincing fake audio sample in under thirty seconds. Those samples can deliver innocuous greetings and harmless statements, but they can also pressure listeners into actions that benefit a criminal.

To Catch a Deepfake: Identifying Real-Time Scams

With real-time deepfakes now added to the cybercriminal's repertoire, you may encounter one of these live impersonations during online research and investigations.

Luckily, real-time deepfakes are still in their infancy, which means they have a few telling flaws. If you want to catch a deepfake in real time, ask the person you suspect to turn sideways. If they are real, you will see a normal profile view. If they are a deepfake, their face will look distorted, blurred, or otherwise unsettling. Why? To render a convincing 3D likeness from every angle, deepfake technologies need photos of the target's side profile as well as their face, but most people publish only front-facing images of themselves online. Consequently, side profiles tend to appear unnatural until more images of the target surface on the web, or until deepfake technologies improve.

You can also identify live deepfakes with the same simple techniques used to spot ordinary ones. During a webinar hosted by a suspected deepfake, scrutinize the presenter's facial expressions and eye movements for unnatural patterns, such as failing to blink for long periods. During a phone call, pay attention to the urgency of your conversation partner: do they ask you to send money or personal data as soon as possible? You can typically uncover deepfake scams by practicing caution and discernment when interacting with online users. And while audio deepfake samples might sound disturbingly real to the average listener, researchers have found that the vocal characteristics exhibited by audio deepfakes may not match those of a real human. Findings like these will help guide the development of tools that can quickly assess a live deepfake's authenticity.
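The blink check above can even be automated. One common heuristic from the computer vision literature is the "eye aspect ratio" (EAR), which drops sharply when an eye closes; a video feed whose EAR never dips is a red flag. The sketch below is a minimal, hypothetical illustration: it assumes the six (x, y) eye landmarks per frame come from a separate face-landmark detector (such as dlib or MediaPipe), and the sample values are invented for demonstration.

```python
# Minimal sketch of the eye-aspect-ratio (EAR) blink heuristic.
# Assumption: a face-landmark detector (not shown) supplies six (x, y)
# points around each eye per video frame; values here are illustrative.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour.
    Returns a ratio that falls toward zero as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # two vertical eyelid distances over one horizontal corner distance
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: runs of consecutive frames where EAR < threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Illustrative per-frame EAR values: two clear dips below 0.2 = two blinks.
ears = [0.30, 0.31, 0.12, 0.10, 0.29, 0.30, 0.30, 0.11, 0.09, 0.28]
print(count_blinks(ears))  # prints 2
```

In practice you would feed the EAR series from a live video stream; a presenter who produces no blinks over several minutes of footage warrants closer scrutiny, though this heuristic alone cannot prove a feed is fake.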

To research potential deepfakes while protecting your own identity, it's important to use a managed attribution platform. Managed attribution lets you control the online identifiers you expose, such as your IP address, geolocation, and cookies. By utilizing this type of technology, you can prevent a potential deepfake artist from duping you throughout your mission activities.

Deepfake impersonators want your time, trust, and personal information. Evaluate virtual meetings and phone calls before giving away your valuable assets to these real-time scammers. And if you’re an online researcher, be wary of believing everything you see or hear before capturing it as authentic data.     

Related Article: OSINT Techniques: Discovering Deepfakes