Deepfake porn: what to do if someone makes one of you

When explicit deepfake images of Taylor Swift recently started going viral on X (formerly Twitter), the platform eventually deleted the original poster’s account, then made the pop star’s name impossible to search, although some search terms still surfaced pornographic content.

In short, X didn’t know how to keep images off its platform. This does not bode well for the average person who becomes a victim of a non-consensual deepfake image or video.

After all, if social media platforms can’t protect one of the most famous people in the world from deepfake abuse, they certainly can’t guarantee the safety of unknown users, who can’t rely on lawyers, publicists and a fervent fan base for help.

Adam Dodge, a licensed attorney and founder of Ending Technology-Enabled Abuse (EndTAB), argues that the lack of safeguards, regulation and robust legal protection leaves victims, who are consistently women, to deal with the fallout from non-consensual deepfake explicit or pornographic images and videos that depict them.

Dodge argues that placing this burden on an already traumatized person only amplifies the violation they have suffered. But unfortunately, doing it yourself is currently the main way to deal with deepfake abuse.

If you are a victim of deepfake abuse, here are six steps you can take to protect yourself:

1. Recognize the harms of deepfake abuse.

Dodge says victims may hear that deepfake pornography doesn’t really harm them because the images or videos aren’t real. He urges victims not to believe this reasoning.

Instead, he views AI image-based abuse as a form of violence, particularly against women. Fake images and videos can damage a woman’s reputation, harm her career prospects, and be used by strangers to harass and intimidate her online and offline. Dealing with their removal is also exhausting and emotionally draining. In other words, he considers this type of abuse against women part of a spectrum of violence that leads to real trauma.

Before a victim begins the arduous process of dealing with deepfakes, Dodge recommends that they take a moment to note that what happened is not their fault and to validate what they are experiencing.

“Recognizing the harm is really important for the victim, and for the people who support them, and for the people who create and share these things, so that they understand that this is a deeply violent and harmful act.”

There are also resources to help support victims. The U.S.-based nonprofit Cyber Civil Rights Initiative runs an image abuse helpline, as well as a detailed guide on what to do once you become a victim. In the United Kingdom, victims can contact the Revenge Porn Helpline, which helps victims of intimate image abuse.

2. Gather evidence by documenting the content.

Currently, Dodge says, most AI image-based abuse occurs through two mediums.

One type is perpetrated through apps that allow users to take an existing image of someone and turn it into a fake nude using the app’s AI-powered algorithm.

The second type of abuse is generated by deepfake face-swapping apps, which can superimpose a person’s face onto a pre-existing pornographic image or video. Although fake, the resulting image or video is surprisingly realistic.

A growing type of abuse can be attributed to text-to-image generators, which can turn written prompts into fake nude or explicit images. (Mashable is not publishing the names of these apps to avoid making them more widely known to would-be perpetrators.)

Regardless of the format used, victims should do their best to document each instance of AI image-based abuse via screenshots or by saving image and video files. These screenshots or files may be used in takedown requests and legal action where possible. For a step-by-step guide to documenting evidence, consult the Cyber Civil Rights Initiative guide.

Yet gathering this evidence can further traumatize victims, which is why Dodge recommends that they enlist a “circle of support” to do this work.

“If (victims) are going to report it, it’s really critical to have evidence,” Dodge says.
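
For victims or their support circle who handle this documentation themselves, keeping a consistent record makes the evidence easier to use later in takedown requests or a police report. Below is a minimal, illustrative Python sketch, not an official tool from any organization mentioned here, that logs each saved screenshot or file with a UTC timestamp and a SHA-256 checksum; the file name and source URL in the usage example are hypothetical.

```python
# Illustrative evidence log: records each saved screenshot or file with a
# timestamp and a SHA-256 checksum so the copies can later be shown to be
# unaltered. File names and URLs below are hypothetical examples.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_evidence(entries: list[tuple[str, str]], manifest: str = "evidence_log.csv") -> None:
    """Append (file path, source URL) pairs to a CSV evidence manifest."""
    with open(manifest, "a", newline="") as out:
        writer = csv.writer(out)
        for file_path, source_url in entries:
            p = Path(file_path)
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),  # when the copy was logged
                p.name,                                   # file name of the screenshot or video
                sha256_of(p),                             # checksum proving the copy is unchanged
                source_url,                               # where the content was found
            ])

# Example usage (hypothetical file and URL):
# log_evidence([("screenshot_2024-01-30.png", "https://example.com/post/123")])
```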

3. Send takedown notices to the platforms where the content appears.

Social media platforms allow people to report when a user has posted non-consensual images of them online. Historically, these takedown requests have been used to help victims whose real intimate images were being shared without permission. But Dodge says victims of AI image-based abuse can also use the tool.

Each platform has its own process. For a complete list of online removal policies for major apps, social media platforms and dating sites, consult the Cyber Civil Rights Initiative’s guide.

Dodge also recommends the free tool offered by StopNCII.org, a non-profit organization that supports victims of non-consensual intimate image abuse. The organization’s tool allows victims to select an image or video of themselves that was shared without their consent and independently generate a digital fingerprint, or hash, in order to report that content. The victim does not need to upload the image or video itself, so it never leaves their possession.

The organization then shares the hash with its partners, including companies like Facebook, Reddit and TikTok. Those partners can then detect content matching the generated fingerprint and remove any matches on their own platforms where applicable.
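
To make the mechanism concrete, here is a rough Python sketch of the hash-and-match idea described above; it is an assumption about the general approach, not StopNCII’s actual implementation. Real systems reportedly use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 below only matches byte-identical copies.

```python
# Rough sketch of hash-and-match reporting: the fingerprint is computed
# locally on the victim's device, only that fingerprint is shared, and
# platforms compare uploads against the shared list. Purely illustrative;
# not StopNCII's actual implementation.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Compute a hash locally; the file itself never needs to be uploaded."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Fingerprints the victim has reported (shared with partner platforms).
reported_hashes: set[str] = set()

def report_image(path: str) -> None:
    """Victim side: add the local file's fingerprint to the reported set."""
    reported_hashes.add(fingerprint(path))

def should_block(uploaded_file: str) -> bool:
    """Platform side: flag an upload whose fingerprint matches a reported one."""
    return fingerprint(uploaded_file) in reported_hashes
```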

4. Make requests to deindex images and videos from search engines.

Bing and Google allow users to submit requests to deindex fake and non-consensual pornographic images and videos from their search results. Dodge recommends that victims use this strategy to limit how easily the abusive AI-generated content can be found.

Google and Bing each publish step-by-step instructions for submitting these deindexing requests.

It is important to make these requests specifically to each business. This month, NBC News found that Google and Bing search results surfaced non-consensual deepfake porn in response to some queries, raising questions about how often the companies patrolled their indexes for this content in order to remove it.

5. Research your legal options.

As of 2021, more than a dozen states, including California, Texas and New York, had laws relating to deepfake images, according to the Cyber Civil Rights Initiative. If you live in a state where laws prohibit the creation of deepfake pornography or AI image-based abuse, you may be able to file a police report or sue the perpetrator. Internationally, sharing deepfake porn has become a crime in England and Wales.

Even in the many U.S. states that don’t prohibit this type of abuse, Dodge says there are other related laws that may apply to a victim’s case, including cyberstalking, extortion and child pornography.

Yet Dodge says many police departments are unprepared and lack the resources and personnel to investigate these cases, so it is important to manage expectations about what is possible. Additionally, some victims, particularly those who are already marginalized in some way, may choose not to report non-consensual deepfakes to authorities for a variety of reasons, including a lack of trust in law enforcement.

6. Opt out of data broker sites.

Dodge says victims of non-consensual intimate images are sometimes targeted by strangers online if their personal information is linked to the content.

Even if this hasn’t happened yet, Dodge recommends opting out of data broker sites that collect your personal information and sell it to anyone for a fee. These brokers include companies like Spokeo, PeekYou, PeopleSmart and BeenVerified. Victims will need to contact each broker individually to request removal of their personal information, although a service like DeleteMe can monitor and delete this data for a fee. DeleteMe charges a minimum of $129 for an annual subscription, which scans for and removes personal information every three months.

Google also has a free tool to remove certain personal information from its search results.

Given how quickly AI image-generation tools are proliferating, Dodge cannot imagine a future without non-consensual, AI-generated explicit images.

Until a few years ago, committing such abuses required computing power, time and technical expertise, he notes. Now these tools are easy to access and use.

“It couldn’t be easier,” Dodge says.
