Social media began testing our credulity years ago, but society’s techno-information disorder might be getting worse. Deepfakes, or “synthetic” computer-generated videos, are the next generation of truth killer.
Our ability to digitally record has held politicians to account for things they said behind closed doors, has mobilised and united people in common struggles, and has supported crucial testimony in prominent prosecution cases.
But the latest advances in artificial intelligence (AI) are degrading our capacity to discern authentic video and audio recordings from artificially fabricated inventions.
While there are light, harmless, and even useful applications for synthetic video technology, opportunities for its dangerous exploitation abound.
A deepfake is an inauthentic digital video of a person created using AI. It can be understood as human-centric special effects powered by algorithms. It often involves mapping the face or words of one individual on to that of another, making it appear as though they have said or done something they did not.
The viral TikTok video of a fake Tom Cruise ranting and purporting to perform magic tricks was one particularly sophisticated example of the medium. The video required a collaboration between actor-impersonator Miles Fisher and visual effects specialist Chris Ume, as well as months of work spent training the AI and polishing the footage, frame by frame.
Many deepfake videos take much less time and effort to create.
One video depicted former President Barack Obama disparaging his successor with some uncharacteristically colourful language; another showed Facebook CEO Mark Zuckerberg boasting about the power he wields over billions of people through his company’s data harvesting practices.
Neither video was real.
Some of the more prominent examples were made to amuse or simply to resonate with the public, while other creators sought to highlight the potential for deepfakes to cause harm. Deepfakes can be extremely convincing and could easily be used to sow discord in society.
According to the Brookings Institution, a well-timed forgery could even alter the outcome of an election: “These realistic yet misleading depictions will be capable of distorting democratic discourse; manipulating elections […] and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
While governments tend not to rely on open-source information in making strategic decisions, the public certainly does.
A case recently emerged in the United States of a woman accused of using deepfake videos to harass members of her daughter’s cheerleading squad and damage their reputations. The episode demonstrated clearly how the technology could be used for nefarious ends against everyday people.
A number of deepfake specialists consulted by journalists concluded that some of the clips in the case were almost certainly not deepfakes, adding a layer of complexity to proceedings. Police officers, unable to differentiate between real and synthetic media, took it at face value that all of the images and videos were fake, as the victims claimed.
This points to another major issue.
The fact that deepfakes now exist not only creates doubt about whether or not video evidence is authentic, but also offers a way for people to claim that even real videos of their actions are forgeries.
“Besides the fact that you can no longer trust the images you see, a politician can also exploit deepfakes to deny certain statements,” explains Mark van Rijmenam, a future technology strategist and entrepreneur.
The Watergate scandal could have played out quite differently if all involved had had the luxury of denying recordings were genuine. It is also easy to imagine how such deniability would be deployed by dishonest actors and authoritarian regimes to control information ecosystems.
“My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts,” says Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley’s School of Information.
The arms race
As an emerging technology, synthetic video is evolving at an incredible pace. One issue is that, as quality improves, the glitches and artefacts that give away an AI-generated image become ever rarer.
As soon as one team of developers notices a weakness that can be used to identify an image or video as fake, another team corrects for it in the algorithm they use to produce deepfakes. The cycle is reminiscent of the continual cat-and-mouse between exploitation and threat detection in the world of cybersecurity.
Once it was noticed that AI-generated faces did not blink correctly, this fact was leveraged as a means of detection. But very soon, deepfakes emerged with exceptionally natural blinking.
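To make the blink example concrete, the sketch below shows the kind of measurement early blink-based detectors relied on: the “eye aspect ratio” (EAR), which compares the height of the eye to its width across six landmark points. The function is a standard formulation of the heuristic, but the landmark coordinates here are invented purely for illustration; a real detector would take them from a facial-landmark model and track how the ratio changes over the frames of a video.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio from six landmarks around one eye:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    The value is high while the eye is open and drops towards
    zero during a blink; a video with no such drops is suspect."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark coordinates for an open and a closed eye.
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
closed_eye = [(0, 0), (2, 0.4), (4, 0.4), (6, 0), (4, -0.4), (2, -0.4)]

print(eye_aspect_ratio(*open_eye))    # well above a typical blink threshold
print(eye_aspect_ratio(*closed_eye))  # close to zero: eye shut
```

A detector built on this idea simply counts how often, and how regularly, the ratio dips over time, which is exactly the statistical tell that later generations of deepfakes learned to reproduce.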
In September 2019, companies including Facebook and Microsoft launched the Deepfake Detection Challenge, a call for developers to produce open-source detection tools, but the “arms race” could be endless.
“In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.”
But it is important to acknowledge that the use of another person’s image without or against their consent can cause lasting harm, whether or not the images are convincing.
The Reddit user “deepfakes” gained notoriety by posting pornographic videos of celebrities whose faces had been recast on to adult film stars’ bodies. This grim origin story is in fact where the deepfake phenomenon surfaced, and from where the technology derived its name.
More troubling still, a study by cybersecurity company Deeptrace conducted in 2019 found that 96% of deepfake videos online were pornographic in nature and exclusively targeted women.
More recently, the company indicated that the number of deepfake videos online roughly doubles every six months. Things are advancing so quickly that there are now even platforms and simple mobile applications offering deepfake technology.
Giorgio Patrini, CEO of Deeptrace, calls in his foreword to the company’s report for greater focus and urgency in the development of deepfake countermeasures. We are entering a world in which “our historical belief that video and audio are reliable records of reality is no longer tenable,” Patrini says.
When it comes to new technologies, Silicon Valley players and the big tech giants have become infamous for their suspect optimism, but perhaps there are times when pessimism is the best course of action.