The advent of deepfakes dates back nearly three decades, to 1997 and the inception of the Video Rewrite program. The “program” was, in actuality, a paper that showcased an innovative (for its time) video-dubbing technique for syncing existing footage of a speaker to newly re-recorded audio.
In the paper, co-authors Christoph Bregler, Michele Covell and Malcolm Slaney provided examples of how they had manipulated people’s likenesses in older videos to make them look and sound as if they were saying new lines. Some 20 years later, a now-infamous video of President Barack Obama emerged, warning Americans of a new technology the nation’s enemies could use to make anyone say anything.
Ironically, the mannerisms and words spoken in the video are not Obama’s own. It was a deepfake video in which actor and director Jordan Peele mimicked the former U.S. president.
Deepfakes are artificially generated media produced by machine learning algorithms and artificial intelligence (AI) that, as the Video Rewrite program intended, aim to appear real. They are created by training a machine learning model on large quantities of data about a person, which the model then uses to generate new content. Much of that content is harmless humor, but the technology’s potential to spread false information cannot be overstated.
Indeed, some deepfakes are so realistic that it has become increasingly difficult for the average person to discern them from authentic images or videos. The technology behind deepfakes is rapidly becoming more advanced, so much so that it has already caused notable damage to both individuals and organizations.
Risks and Challenges Posed by Deepfake Technology
The most obvious malicious use of deepfake technology lies in its capacity to create and spread falsehoods that manipulate public opinion, which also defrauds the individuals and organizations whose likenesses are used. Recently, for example, deepfake pictures of former President Donald Trump being arrested ahead of his arraignment spread like digital wildfire. And because deepfake technology relies on machine learning, such media becomes easier to alter or create the more the underlying models are trained.
More important, however, is the risk deepfakes pose to cybersecurity. As a hypothetical, suppose a hacker uses the technology to generate a seemingly real likeness of an individual’s face, perhaps the CEO of an investment firm, and combines it with stolen personal emails, passwords and even audio recordings of the person’s voice. From there, deepfake technology could be used to create entirely new videos of that individual saying and doing things they never did.
Another potential risk deepfakes pose is to the integrity of evidence used in criminal trials. If any image or video of a person doing something illegal could plausibly be dismissed as a deepfake, the credibility of video evidence as a whole comes into question. In fact, deepfakes have already had some success fooling facial recognition technology.
As deepfake technology rapidly advances, it creates an urgent need for new AI detection and prevention strategies in cybersecurity, yet this is difficult considering how quickly the technology is evolving. To meet this challenge, AI detection programs must be continuously and proactively updated so they can detect the subtle differences between authentic and fake media.
Mitigating the Effect of Deepfake Technology
Thankfully, there are still precautions individuals and organizations can take to protect their data from the cyberthreats posed by deepfake technology. Individuals should be mindful of how widely they share photos and videos of themselves, since publicly available media is the raw material deepfake models are trained on. Blockchain technology, for instance, can be used to provide a secure and immutable record of media provenance, making it more difficult to pass off deepfakes as authentic.
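The core idea behind an immutable media record can be illustrated with a few lines of code. The sketch below is a hypothetical, simplified hash chain (real blockchain-based provenance systems are far more involved): each record commits to a fingerprint of the media file and to the hash of the previous record, so altering any file or record after the fact breaks verification.

```python
import hashlib
import json

def fingerprint(media_bytes: bytes) -> str:
    """Content fingerprint of a media file (here, its raw bytes)."""
    return hashlib.sha256(media_bytes).hexdigest()

def append_record(chain: list, media_bytes: bytes, source: str) -> None:
    """Append a provenance record that commits to the previous record's hash."""
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "source": source,
        "media_hash": fingerprint(media_bytes),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify_chain(chain: list) -> bool:
    """Re-derive every hash; any edit to a record or its media breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["record_hash"] != expected:
            return False
        prev_hash = record["record_hash"]
    return True
```

If a deepfake were swapped in for a registered original, its fingerprint would no longer match the recorded one, and `verify_chain` would flag the tampering.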
Likewise, generative adversarial networks (GANs), a recent innovation in machine learning, build complex concepts out of simpler ones by pitting two neural networks against each other: a generator that produces synthetic content and a discriminator that evaluates it and tries to flag it as fake. The same adversarial dynamic that makes deepfakes convincing can be turned around to strengthen detection software, training it to discern the subtleties of deepfake-created media.
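The generator-versus-discriminator dynamic can be shown in miniature. The toy example below is purely illustrative (real deepfake GANs use deep networks over images, not one-parameter models): the “real media” are numbers drawn from a fixed distribution, a one-parameter generator learns to imitate them, and a simple logistic discriminator tries to tell real from fake at every step.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Clipped for numerical stability.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -50, 50)))

def train_toy_gan(steps=3000, batch=64, lr_d=0.05, lr_g=0.05):
    """Toy 1-D GAN: 'real media' are samples from N(4, 0.5); the generator
    learns a shift `theta` so its samples become indistinguishable from them."""
    w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
    theta = 0.0       # generator: G(z) = theta + z
    for _ in range(steps):
        real = rng.normal(4.0, 0.5, batch)
        fake = theta + rng.normal(0.0, 0.5, batch)
        # Discriminator ascends log D(real) + log(1 - D(fake)).
        d_real = sigmoid(w * real + b)
        d_fake = sigmoid(w * fake + b)
        w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
        b += lr_d * np.mean((1 - d_real) - d_fake)
        # Generator ascends log D(fake): it shifts toward whatever
        # currently fools the discriminator (non-saturating loss).
        d_fake = sigmoid(w * fake + b)
        theta += lr_g * np.mean((1 - d_fake) * w)
    return theta

theta = train_toy_gan()
```

After training, `theta` lands near the real distribution’s mean of 4: the generator has learned to mimic the “authentic” data, which is exactly the arms race that both powers deepfakes and, when the discriminator side is kept, powers their detection.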
For organizations, it’s essential to work with all team members and stakeholders to understand where the weaknesses lie in their internal digital networks. Collaborating with other researchers, developers and government agencies can also help curb the further spread of disinformation.
While the original intention of deepfake technology was to open new possibilities in media production, it’s crucial to stay vigilant and prepared for its potential for abuse. The ability to create convincing-yet-fake media poses a significant challenge to those seeking to stop the spread of disinformation and the rise of digital fraud. In a world where fake news can have severe consequences, it’s vital to prioritize the development and implementation of effective strategies to detect, prevent and mitigate the risks posed by deepfake technology.