Deepfakes on Social Media
How Social Media Platforms are Addressing Synthetic and Manipulated Images
In mid-2019, a video of Facebook co-founder and CEO Mark Zuckerberg appeared online. Staring into the camera, Mr. Zuckerberg calmly states, “Imagine this for a second: one man, with total control of billions of people’s stolen data.” Then, with all the hyperbole and panache of a Bond villain, Mr. Zuckerberg concludes, “I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
The video, understandably, went viral. The only problem: Mark Zuckerberg didn’t make the video. It was a deepfake, and a stark reminder that this emerging technology has the potential to amplify the challenge of identifying disinformation online.
Identifying mis- and disinformation online has become a key public policy and national security priority for public and private sector decision makers alike—particularly in light of the upcoming presidential election. In response, tech firms have increased their collaboration around identifying, cataloguing, and potentially removing intentionally misleading content—including deepfakes.
Deepfakes are images and videos created with machine learning and artificial intelligence (AI), in which an algorithm alters an existing image or video or superimposes one person’s likeness onto another’s. They are an especially sophisticated challenge because, just as with the Zuckerberg video, they can make it appear as though a prominent politician or public figure is saying or doing something they did not. The deepfake of Mr. Zuckerberg is uncanny: he gestures to the camera as employees of the Silicon Valley tech giant mill around in the background, while a “CBSN” news chyron claims that the Facebook CEO is announcing an initiative to increase ad transparency.
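To give a sense of how the face-swap variety works: the classic recipe, popularized by early open-source deepfake code, trains a single shared encoder alongside one decoder per identity; at generation time, person A’s face is encoded and then decoded with person B’s decoder. The sketch below is a bare-bones illustration of that architecture, not any production tool, and the layer sizes and input dimensions are arbitrary assumptions chosen for readability.

```python
# Bare-bones illustration of the shared-encoder / per-identity-decoder
# architecture behind classic face-swap deepfakes (layer sizes arbitrary).
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder learns a face representation shared by both identities.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
        )
        # One decoder per identity learns to reconstruct that person's face.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    @staticmethod
    def _make_decoder():
        return nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x, identity: str):
        z = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(z)

# Training reconstructs each person with their own decoder; the "swap"
# happens at inference, when person A's face is decoded as person B.
model = FaceSwapAutoencoder()
face_a = torch.rand(1, 3, 64, 64)      # placeholder 64x64 input frame
swapped = model(face_a, identity="b")  # decode A's features as B
```

Because the encoder is shared, it learns pose and expression features common to both faces, which is exactly what lets the swapped output track the source footage so convincingly.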
The potential that bad actors will use deepfakes to spread false information online has only grown, with some 62% of adults in the United States getting news from social media. In response, Google, Microsoft, Facebook, and other tech firms have begun to collaborate on efforts to build or refine tools that can automate deepfake detection. One of the largest examples is the Deepfake Detection Challenge (DFDC), a joint venture between Amazon Web Services (AWS), Facebook, Microsoft, and the Partnership on AI. The DFDC is an open competition to build the best deepfake detection model, with up to $1,000,000 in prizes.
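To make the detection task concrete, the sketch below shows the general shape of a frame-level classifier, an approach many DFDC entries built on: sample frames from a video, score each frame with a convolutional network, and average the scores. The ResNet-18 backbone, the single-logit head, and the sampling interval are all illustrative assumptions, and the classification head would need to be fine-tuned on labeled real/fake frames before the scores meant anything.

```python
# Minimal sketch of frame-level deepfake scoring (illustrative only;
# the classifier head here is untrained and the threshold is arbitrary).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet preprocessing for the backbone network.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 backbone with a single-logit head: P(frame is fake).
# In practice this head is fine-tuned on labeled real/fake face crops.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

def score_video(path: str, every_nth: int = 30) -> float:
    """Average per-frame fake probability across sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                logit = model(batch)
            scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # Flag a clip if its mean fake score crosses a chosen threshold.
    prob = score_video("clip.mp4")
    print(f"mean fake probability: {prob:.3f}")
```

Averaging over sampled frames is the simplest aggregation strategy; competitive systems typically add face detection, per-face cropping, and temporal models, but the basic pipeline looks much like this.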
Since the beginning of 2020, Twitter, Facebook, YouTube, and TikTok have all announced policies for synthetic and manipulated media. In February, Facebook announced an expansion of its partnership with Reuters, an international news agency, to cover deepfakes and other forms of misinformation. A team of four fact-checkers, two based in Washington, D.C. and two based in Mexico City, will review content “across the spectrum of misinformation formats,” ranging from old videos attributed to current events to deepfakes and other edited images.
In November 2019, Twitter announced plans to “place a notice next to Tweets that share synthetic or manipulated media; warn people before they share or like Tweets with synthetic or manipulated media; or add a link…so that people can read more about why various sources believe the media is synthetic or manipulated.” Twitter has also reserved the right to take down synthetic or manipulated media that “is misleading and could threaten someone’s physical safety or lead to other serious harm.”
Efforts to identify misleading or manipulated content present both opportunities and risks for researchers, analysts, and investigators. On one hand, open source tools that can identify synthetic or manipulated media can be used to detect, disrupt, or map its dissemination across various platforms. On the other hand, using an artificially generated image online, for instance as a profile photo, may become more difficult as these detection tools are created and deployed.
The video of Mark Zuckerberg is not perfect: the voiceover is clearly not the Facebook CEO’s, and some of the minor facial expressions betray the video’s false aesthetic. It is likely, however, that these tells will disappear as AI improves, increasing the probability that we all, at some point, will be deepfaked.