It often happens that the true extent of a phenomenon becomes clear only when it involves a famous person. This was the case with the recent deepfakes of Taylor Swift: fake sexually explicit images of the artist were shared on X and viewed by millions of users before the platform blocked searches for the American singer. A deepfake is a fabricated photograph or clip, created with artificial intelligence and without the consent of the person depicted, by combining and overlaying images. The technique is not new and has often been used to produce pornographic videos depicting celebrities, but it can also be exploited to spread fake news, run scams, and carry out cyberbullying and other cybercrimes.
While it is true that cybercriminals mostly target well-known faces, the rapid growth and availability of generative artificial intelligence tools mean deepfakes are set to increase significantly, as even people with little computer expertise will be able to produce potentially harmful clips quickly and easily. For this reason, it is important to understand the technique and know how to defend yourself if you become a victim.
Pending comprehensive, shared regulations, victims largely have to fend for themselves. And since the targets are almost always women, it is important to know how to respond without compounding the trauma caused by the abuse. Outlining a do-it-yourself, and therefore affordable, defence strategy is Adam Dodge, attorney and founder of Ending Tech-Enabled Abuse, who says the first step is to recognize that you are a victim of abuse.
As Dodge explained to Mashable, a common assumption is that a deepfake does not harm the victim because it is based on fabricated images, as if the images were not worth worrying about because they are not real. This is a misconception: even if artificially created, such content can ruin the victim's reputation. Just imagine, for example, the consequences in the professional sphere.
Another important psychological point is to understand that those who suffer such abuse are not at fault, even though victims often blame themselves. A growing number of associations and programs now come to victims' aid, with protocols and initiatives that help them overcome the trauma.
Gather evidence, remove content and take legal action
While difficult, it is important to gather evidence of the abuse. Broadly speaking, perpetrators use two techniques. The first relies on face-swap applications, which superimpose one person's face onto another person's naked body to create pornographic images or clips. The second starts from real images of the intended victim and turns them into fake nudes using dedicated artificial intelligence tools. Saving the images and videos that demonstrate the deepfake allows the victim to request removal of the offending content and to take legal action. Be aware, however, that collecting evidence can weigh on the victim and compound the trauma they are already experiencing; asking trusted friends for help at this stage can make a difference. One way to preserve that evidence is sketched below.
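For evidence to be usable later, it helps to record not just the files themselves but also when they were saved and a cryptographic fingerprint showing they have not been altered since. The following is a minimal, illustrative Python sketch using only the standard library; the `evidence/` folder and the manifest format are assumptions of this example, not part of any official procedure.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_DIR = Path("evidence")          # hypothetical folder of saved screenshots/clips
MANIFEST = EVIDENCE_DIR / "manifest.json"

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large video files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest() -> list[dict]:
    """Record name, SHA-256 fingerprint, size, and timestamp for each file."""
    records = []
    for path in sorted(EVIDENCE_DIR.iterdir()):
        if path == MANIFEST or path.is_dir():
            continue
        records.append({
            "file": path.name,
            "sha256": sha256_of(path),
            "size_bytes": path.stat().st_size,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
    return records

if __name__ == "__main__":
    records = build_manifest()
    MANIFEST.write_text(json.dumps(records, indent=2))
    print(f"Recorded {len(records)} files in {MANIFEST}")
```

Keeping the manifest alongside the files lets a platform, lawyer, or support organization later verify that nothing was modified after capture.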
Although each platform has its own policy, the necessary step after collecting evidence is to request the removal of the images and videos shared without consent. Useful and effective tools make the job easier, often available through organizations that support victims of digital abuse, and can flag content for removal across multiple sites and social networks at once.
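Some of these services, StopNCII among them, work by computing a hash of the intimate image on the victim's own device, so the image itself never has to be uploaded; participating platforms then match that fingerprint against new uploads. As a rough illustration of the idea, the sketch below uses the open-source `imagehash` Python library to compare perceptual hashes; the real services use their own hashing schemes, and the file names and threshold here are assumptions.

```python
# pip install ImageHash pillow
from PIL import Image
import imagehash

# Hypothetical local files; nothing leaves the device.
original = imagehash.phash(Image.open("my_photo.jpg"))
candidate = imagehash.phash(Image.open("reposted_copy.jpg"))

# Perceptual hashes of near-identical images differ in only a few bits,
# so a small Hamming distance suggests the same underlying picture.
distance = original - candidate
print(f"original: {original}  candidate: {candidate}  distance: {distance}")

if distance <= 5:  # illustrative threshold, not an official value
    print("Likely the same image: a candidate for a takedown request")
```

The key design point is that only the hash travels: platforms can recognize a known abusive image without the victim ever having to share it again.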
Google and Bing deserve a separate discussion, as search engines are the starting point snoopers use to track down pornographic images and clips. Here the victim must request the de-indexing of the false and personally offensive content by following each search engine's own removal procedure.
The rapid spread of deepfakes has convinced several countries to regulate the abuse, which means that where laws are in place, victims can take legal action against the perpetrator. England already has a law against sexually explicit deepfakes, while the EU's Artificial Intelligence Act, currently the most comprehensive piece of legislation on the subject, regulates the technique chiefly through transparency obligations imposed on the content creator.
Last but not least, to prevent the victim from reliving the nightmare in the future, especially through strangers who come across the earlier abuse online, it is worth using services that monitor sites aggregating personal data and news, which could end up linking the abuse to the victim's identity. Google provides a free tool for removing personal information from search results, and there are also paid services such as DeleteMe. A sketch of how such monitoring might be automated follows below.
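Part of that monitoring can be automated. As a purely illustrative sketch, the script below uses Google's Custom Search JSON API, which is a real, documented API, though the environment variables, the watched domain, and the idea of wiring it up this way are assumptions of this example, to check whether a name surfaces on a data-aggregator site.

```python
import os
import requests

API_KEY = os.environ["GOOGLE_API_KEY"]   # key for the Custom Search JSON API
CX = os.environ["GOOGLE_CSE_ID"]         # ID of a Programmable Search Engine you set up
NAME = "Jane Doe"                        # hypothetical name to watch for
WATCHED_SITES = ["example-people-finder.com"]  # hypothetical aggregator domains

def check_site(site: str) -> list[dict]:
    """Return search hits for NAME restricted to a single domain."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": f'"{NAME}" site:{site}'},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    for site in WATCHED_SITES:
        for item in check_site(site):
            # Each hit is a candidate for a removal or de-indexing request.
            print(f"{site}: {item['title']} -> {item['link']}")
```

Run on a schedule (a cron job, say), a script like this turns the advice to keep checking into an early warning rather than a manual chore.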