I've been investigating some initial applications of the recent Glow model released by OpenAI. Glow is a reversible generative model built on invertible 1x1 convolutions. It allows users to interpolate along facial attributes like hair color, beard density, smile, and age. It can also interpolate between two encoded images by sampling the points between their latent representations.
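The interpolation described above can be sketched in a few lines. This is a minimal illustration, not Glow's actual codebase: the latent vectors here are stand-ins for what you would get by encoding two face images with a trained Glow network, and decoding each intermediate point is what produces the visual morph.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=8):
    """Linearly interpolate between two latent codes.

    In a flow model like Glow, each image maps invertibly to a latent
    vector z; decoding the points along the line between two such
    vectors yields a smooth morph between the two images.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_a + a * z_b for a in alphas]

# Stand-in latent codes; a real pipeline would obtain these by
# running two face images through the trained Glow encoder.
z_a = np.zeros(4)
z_b = np.ones(4)

path = interpolate_latents(z_a, z_b, steps=5)
print(path[2])  # midpoint between the two codes: [0.5 0.5 0.5 0.5]
```

Attribute edits (more smile, lighter hair) work the same way, except instead of interpolating toward a second image, you nudge a single latent code along a learned attribute direction.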
To me, what is most interesting is not the interpolation itself but the subtle changes in the still images of faces produced along the way. Keeping with the theme of working with fake news, one can imagine scenarios in which someone would want to A/B test various falsified images, showing them to different demographics in a political setting or to customer segments in an advertising setting.
The ability to subtly shift age, skin tone, hair color, and facial features is a powerful and potentially harmful (or at least deceitful) tool. Imagine a scenario in which polling suggested that voters would be more willing to vote for Barack Obama if he had lighter skin and more traditionally white-masculine features. A Glow network would allow for nearly unlimited interpolation across that space, creating opportunities for hyper-targeted political advertising, potentially at a personal level.
There is a long history of politicians and public figures doctoring their images in Photoshop. This came to light most recently with the realization that many of Donald Trump's photos had been doctored to make him appear taller and more fit. With networks like Glow, this doctoring could be extended to facial features in seemingly unlimited ways.