Recent advancements in generative adversarial networks have created opportunities to develop highly realistic fake images and videos. While much of the attention in this area is paid to hyper-realism in images, I argue that equal attention should be paid to alternative points of leverage around fake images. As trust in visual imagery is undermined, campaigns built around the use of fake images, much like "fake news," will become more pronounced.

Generative adversarial networks (GANs) are able to produce images from a learned probability distribution of an image dataset. This allows the network to create new, unique images similar to those in the dataset. For more information on GANs, please see my blog post. This is the same type of network used to create "deepfake" images.
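As a rough illustration of the core idea, here is a minimal sketch of the adversarial training loop, assuming PyTorch. The network sizes, data shapes, and hyperparameters are toy placeholders for illustration, not taken from any particular deepfake system:

```python
import torch
import torch.nn as nn

# Toy generator: maps a 100-dim noise vector to a flattened 28x28 image.
G = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Toy discriminator: scores an image as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images has shape (batch, 784)."""
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, 100))

    # Discriminator: learn to separate real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: learn to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
```

The two networks improve in tandem: as the discriminator gets better at flagging fakes, the generator is pushed toward samples ever closer to the learned data distribution.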

Image from: Synthesizing Obama: Learning Lip Sync from Audio

One of the trends in GANs is towards hyper-realism. With recent advancements in large-scale generative adversarial network models for images and videos, concern over the misuse of these technologies has intensified. Politicians have drafted legislation to hold platforms accountable for spreading faked information, including GAN-generated images, and warn of an upcoming onslaught of fake text and images.

While hyper-realistic images are extremely useful in many current scenarios, I argue that it is not necessary to produce highly realistic fake images or text to sow doubt in public discourse. The mere presence of potential fakes is enough to create suspicion of traditionally trusted institutions and sources. Generative images need not be photorealistic to create their intended effect, nor do they need to be fake at all. One such example is the unconfirmed "deepfake" controversy surrounding the release of a video featuring the president of Gabon.

In The Presence of Fakes - Ali Bongo

Ali Bongo was elected president of Gabon in 2009, following a successful but ethically questionable career in the political spotlight under the leadership of the Bongo family, which has held control of Gabon since 1967. In late 2018, 59-year-old Bongo purportedly suffered a stroke and removed himself from public view, raising questions about his health and leadership capabilities. On New Year's Eve of 2018, he appeared in a taped New Year's address to the public from Morocco, where he was receiving medical treatment. "[The] speech is proof that President Ali Bongo is fully recovered. His health problems are now behind him," according to presidency spokesman Ike Ngouoni. With his health already in question and at the center of national debate, the release of the New Year's address video fueled political tension.

Ali Bongo's New Year's presidential address

The controversy and speculation around this video led the opposition, including Bruno Ben Moubamba, who had run against Bongo in the previous two elections, to suggest that the video was a deepfake. Moubamba noted that the eyes in the video seem "immobile" and "almost suspended above his jaw," and that the eye movements are out of sync with the movements of his mouth. "The composition of several elements of different faces, fused into one are the very specific elements that constitute a deepfake."

Facebook comment on deepfake by opposition leader Moubamba

While there is no consensus on whether the video is in fact a deepfake, the very suggestion of the use of deepfakes has stoked division and helped to undermine the government of Gabon. On January 7th, members of the Gabonese Republican Guard took over a radio station and announced that they had seized control of the government to "restore democracy." The attempted coup was eventually quashed, but many have speculated that the deepfake added fuel to the mounting coup d'état.

In the US, had a leader or public figure moved out of the political spotlight, there would be wall-to-wall coverage of their status. However, there are numerous places around the world where current leaders' absences and health go unexplained, and questions about their status and leadership abilities abound. Algerian president Abdelaziz Bouteflika, for instance, recently found himself in a similar position. After suffering a stroke in 2013, Bouteflika largely removed himself from the public eye, prompting many Algerians to question his health, or whether he was alive at all. It was only after intense public scrutiny and protests throughout Algeria, just days ago, that he promised not to pursue a fifth term and resigned as the country's leader.

Speculation around public figures who have removed themselves from the spotlight, for health reasons or otherwise, has an enormous impact on public discourse, particularly when those figures are political leaders. We witnessed this phenomenon most recently in the US with the speculation surrounding Melania Trump and look-alikes.

Images surrounding accusations of fake Melania Trump (left: "imposter" Melania, right: real Melania)

As the technology for developing deepfakes and GAN-generated images accelerates, the landscape around images of public figures will shift dramatically. I've posted previously about potential misuses in politics around Creating Fake Images with Glow and OpenPose + Pix2Pix. Generating close-cropped video of a politician's face requires high-definition footage that hides the telltale signs of GAN output. Producing such a fake typically takes one of three routes: (1) a very well constructed OpenPose + vid2vid pipeline (along with traditional video editing); (2) a deepfake generative adversarial network model, which requires an actor who can convincingly reproduce the target's body movements and features; or (3) a composite bidirectional LSTM mapping audio to facial mask movements, as documented in the "Synthesizing Obama: Learning Lip Sync from Audio" publication. Because we look at human faces for most of our lives, we are uniquely adept at pinpointing flaws in facial movements and features. Taken together, producing a convincing fake is a tall but not unreasonable task.
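To make route (3) a little more concrete, here is a minimal sketch of the audio-to-mouth-shape idea: a recurrent network that maps a sequence of audio features (e.g. MFCCs) to mouth landmark coordinates per video frame. This is an illustrative simplification under assumed shapes and feature choices, not the published Synthesizing Obama model:

```python
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    """Bidirectional LSTM mapping per-frame audio features to mouth
    landmark coordinates (a simplified stand-in for published
    lip-sync models, not a reimplementation of any of them)."""

    def __init__(self, n_mfcc=13, hidden=128, n_landmarks=18):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True,
                            bidirectional=True)
        # Each landmark is an (x, y) point on the mouth contour.
        self.head = nn.Linear(2 * hidden, 2 * n_landmarks)

    def forward(self, mfcc):            # mfcc: (batch, time, n_mfcc)
        states, _ = self.lstm(mfcc)     # (batch, time, 2 * hidden)
        return self.head(states)        # (batch, time, 2 * n_landmarks)

# Example: 200 audio frames -> 200 predicted mouth shapes, which a
# downstream renderer would composite onto the target footage.
model = AudioToMouth()
mouth_tracks = model(torch.randn(1, 200, 13))
print(mouth_tracks.shape)  # torch.Size([1, 200, 36])
```

The hard part, of course, is not this mapping but the rendering and compositing stage that has to survive our finely tuned eye for faces.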

However, the mere presence of deepfakes is enough to have social and political ramifications. This opens up an opportunity to develop deceptive campaigns around images and video. As our trust in the concreteness of images is undermined, our readiness to throw images into question at the slightest hint of potential forgery increases. This opens up potential avenues not only to promote fake content, but also to drive campaigns around the truthfulness of images and videos, fake or otherwise. There are plenty of initiatives and research efforts developing techniques and methods for spotting deepfakes. DARPA's recent MediFor (Media Forensics) initiative, for instance, has invested $68 million in an effort to detect faked media, including deepfakes. As hype and intrigue around deepfakes reach a fever pitch, we also need to consider the secondary effects of deepfakes on institutions and public trust. Speculation itself can have real social and political consequences.
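For a sense of what many of those detection efforts build on, here is a generic sketch of a per-frame real/fake classifier baseline, assuming PyTorch. This is not MediFor's method or any specific published detector; the input size and architecture are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Generic per-frame real/fake classifier: a small CNN over aligned
# face crops. A baseline sketch, not a specific published detector.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),  # logit: > 0 suggests "fake"
)

def fake_probability(face_crop):
    """face_crop: (3, 128, 128) tensor of a detected, aligned face."""
    with torch.no_grad():
        logit = detector(face_crop.unsqueeze(0))
    return torch.sigmoid(logit).item()

# An untrained network outputs ~0.5; training on labeled real/fake
# face crops is what turns this into a usable detector.
print(fake_probability(torch.rand(3, 128, 128)))
```

Even a well-trained detector, though, only addresses the technical half of the problem; it does nothing to stop speculation about whether a genuine video is fake.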

References