You have probably heard about “Deepfakes” in the media, with the proliferation of fake news, synthesized videos made to manipulate conversations or speeches, and altered images that threaten the legitimacy of information presented online. Consequently, our collective habit of accepting videos and photos as reliable records is no longer viable.
Deepfakes entered the public consciousness in 2017, when an augmented video of former president Barack Obama emerged on social media. The video was convincing but fake, and potentially damaging.
Figure: cumulative number of GAN papers by month (source: https://github.com/hindupuravinash/the-gan-zoo)
Deepfakes are a product of a type of artificial intelligence called Generative Adversarial Networks, or GANs. By 2019, over 2,000 varieties of GANs were publicly available as research papers and open-source code, putting the technology within easy reach of everyone. When implemented with malicious intent, these technologies can degrade the quality of public discourse and undermine the safeguarding of human rights.
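A GAN pits two models against each other: a generator that turns random noise into candidate samples, and a discriminator that tries to tell real samples from generated ones. The toy sketch below is a hypothetical one-dimensional illustration of that adversarial loop (not from any production Deepfake system): an affine generator learns to mimic a Gaussian "real" distribution against a logistic-regression discriminator, using plain NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from a Gaussian with mean 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, size=(n, 1))

# Generator: affine map from 1-D noise to a sample (parameters g_w, g_b).
# Discriminator: logistic regression giving P(sample is real).
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.0, 0.0

def generate(z):
    return g_w * z + g_b

def discriminate(x):
    logit = np.clip(d_w * x + d_b, -30, 30)  # clip to avoid overflow
    return 1.0 / (1.0 + np.exp(-logit))

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = real_batch(n)
    x_fake = generate(rng.normal(size=(n, 1)))
    p_real, p_fake = discriminate(x_real), discriminate(x_fake)
    # Gradients of the binary cross-entropy loss w.r.t. d_w, d_b.
    d_w -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # --- Generator step: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=(n, 1))
    p_fake = discriminate(generate(z))
    # Chain rule through the discriminator's linear logit.
    dldx = (p_fake - 1) * d_w
    g_w -= lr * np.mean(dldx * z)
    g_b -= lr * np.mean(dldx)

# After training, generated samples should cluster near the real mean.
mean_fake = float(np.mean(generate(rng.normal(size=(1000, 1)))))
```

Real Deepfake models follow the same adversarial recipe, but with deep convolutional generators and discriminators operating on images or video frames rather than scalars.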
Although Deepfakes, as the name suggests, may sound inherently malicious, these technologies can be beneficial when used responsibly. For example, GANs help media and fashion companies correct images from photoshoots: they can fix a closed eye, a wrong facial expression, or an incorrect human pose. In fashion, GANs are used in the design process via sketch-to-image transfer and machine-generated fashion designs. These are what we call Creative Machines. We also see the rise of licensing the avatar of a model or celebrity in lieu of an on-location photo or video shoot. Other examples include domain transfer, in which the background environment of a model can be made to reflect any season or weather condition.
With the ease of use and commodification of GANs, malicious Deepfakes will continue to rise. The risk is especially acute as we head into the next US presidential election. These capabilities for fabricating videos, photos, and speeches are an emerging threat to digital communication online, affecting news organizations, brands, and social media platforms.
Deepfakes are here to stay, and their impact is already being felt everywhere. Lately, we have seen multi-organization attempts to solve this problem. One such initiative is the DeepFake Detection Challenge (DFDC), led by Microsoft, Amazon, Facebook, and The Partnership on AI Coalition, which crowdsources solutions to the difficult problem of identifying Deepfakes.
Trending in 2020 will be the rise of Deepfake detection algorithms and digital signatures or watermarks (perhaps anchored in a blockchain) to counter fake videos and images posted online. Based on our analysis of the Deepfake and synthetic media landscape, we predict a rise in Deepfake detection and verification startups, along with VC funding to develop a range of technology solutions that protect organizations, brands, and individuals.
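One building block behind such verification schemes is a cryptographic signature computed over the media bytes at capture or publication time; any later alteration invalidates it. The sketch below is a minimal illustration using Python's standard hmac module. The key and media bytes are hypothetical, and a production system would use asymmetric signatures (e.g., Ed25519) with proper key management rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical publisher key; for illustration only.
SECRET_KEY = b"publisher-signing-key"

def sign_media(data: bytes) -> str:
    """Produce a tamper-evident signature over the media bytes."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, signature: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    return hmac.compare_digest(sign_media(data), signature)

# Sign the original media, then tamper with one region of it.
original = b"frame bytes of a published video or image"
sig = sign_media(original)
tampered = original.replace(b"frame", b"faked")
```

Here `verify_media(original, sig)` returns True while `verify_media(tampered, sig)` returns False: even a small edit to the media breaks the signature, which is what makes this approach attractive as a countermeasure against manipulated re-uploads.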
------------------------------------------------------------------------------------------------------
Julian is an innovation strategist at Iterate.ai. Based in Silicon Valley and a graduate of the Kellogg School of Management at Northwestern University, he provides advisory services in innovation, strategy, and diligence to Fortune 500 companies. Interplay is an Innovation Middleware Platform that specializes in fast, low-to-no-code deployments of applications in blockchain, AI, IoT, computer vision, and big data.