Deepfake porn: how to talk to your children about fake explicit images

If the day hasn’t come yet, it’s coming: you need to talk to your child about explicit deepfakes.

The problem might have seemed abstract until AI-generated fake pornographic images of Taylor Swift went viral on X (formerly Twitter). Now the problem simply cannot be ignored, say online child safety experts.

“When this happens to [Swift], I think kids and parents start to realize that no one is safe from this,” says Laura Ordoñez, editor-in-chief and head of digital media and family at Common Sense Media.

Whether it’s explaining the concept of deepfakes and AI-based abuse, talking about the pain these images cause victims, or helping your child develop the critical thinking skills needed to make ethical decisions about deepfakes, there are many topics parents can and should address in ongoing conversations on the subject.

Before you get started, here’s what you need to know:

1. You don’t need to be an expert on deepfakes to talk about them.

Adam Dodge, founder of The Tech-Savvy Parent, says parents who feel they must fully understand deepfakes before talking with their child needn’t worry about seeming like, or becoming, an expert.

Instead, all that’s needed is a basic understanding that AI-powered software and algorithms make it incredibly easy to create realistic explicit or pornographic deepfakes, and that such technology is easy to access online. In fact, children as young as elementary schoolers can encounter apps or software with this capability and use them to create deepfakes with few technical hurdles.

“What I tell parents is, ‘Look, you need to understand how often and how much kids are exposed to this technology, that it’s happening sooner than you think, and understand how dangerous it is.’”

Dodge says parents need to be prepared for the possibility that their child will be targeted by the technology; that they will see inappropriate content; or that they will participate in creating or sharing fake explicit images.

2. Make it a conversation, not a lecture.

Even if these possibilities worry you, avoid rushing into a discussion about deepfakes. Instead, Ordoñez recommends approaching the topic in an open, nonjudgmental way, asking your child what they know or have heard about deepfakes.

She adds that it’s important to consider AI image-based abuse as a form of online manipulation that exists on the same spectrum as misinformation or disinformation. In this context, thinking about deepfakes becomes an exercise in critical thinking.

Ordoñez says parents can help their child learn the signs that an image has been manipulated. Although AI’s rapid evolution means some of these telltale signs no longer appear, Ordoñez says it’s still worth pointing out that any deepfake (not just the explicit kind) may be identifiable by facial discoloration, lighting that seems off, and blurring where the neck and hair meet.

Parents can also learn alongside their child, Ordoñez says. That might mean reading about and discussing fake, non-explicit AI-generated content together, like the song “Heart on My Sleeve,” which went viral in spring 2023 and purported to use AI-generated versions of the voices of Drake and The Weeknd. Although that story has relatively low stakes for children, it can spark a meaningful conversation about how it might feel to have your voice used without your consent.

Parents could also take an online quiz with their child that asks participants to identify which face is real and which is AI-generated, another low-stakes way to confront together how easily AI-generated images can mislead the viewer.

The goal of these activities is to open an ongoing dialogue and help your child develop critical thinking skills that will surely be put to the test when they encounter explicit deepfakes and the technology that creates them.

3. Put your child’s curiosity about deepfakes in the right context.

Even though explicit deepfakes constitute digital abuse and violence against the victim, your child may not fully grasp this. Instead, they might be curious about the technology, and even eager to try it.

Dodge says that while this is understandable, parents routinely place reasonable limits on their children’s curiosity. Alcohol, for example, is kept out of their reach. R-rated films are off-limits until they reach a certain age. They are not allowed to drive without proper instruction and experience.

Parents should think about deepfake technology in the same way, Dodge says: “You don’t want to punish kids for their curiosity, but if they have unfiltered access to the Internet and artificial intelligence, that curiosity is going to lead them down dangerous roads.”

4. Help your child explore the consequences of deepfakes.

Children may view non-explicit deepfakes as a form of entertainment. Tweens and teens may even buy into the argument some make: that pornographic deepfakes aren’t harmful because they aren’t real.

Nonetheless, they can be persuaded to view explicit deepfakes as AI image-based abuse when the discussion incorporates concepts such as consent, empathy, kindness, and bullying. Dodge says invoking these ideas while discussing deepfakes can focus a child’s attention on the victim.

If, for example, a teenager knows to ask permission before taking a physical object from a friend or classmate, the same goes for digital objects, such as photos and videos posted on social media. Using these digital files to create a nude deepfake of someone else is not a joke or a harmless experiment, but a kind of theft that can lead to deep suffering for the victim.

Similarly, Dodge says, just as a young person wouldn’t assault someone on the street out of the blue, attacking someone virtually doesn’t align with their values either.

“These victims are neither fabricated nor fake,” Dodge says. “They are real people.”

Women, in particular, have been targeted by the technology that creates explicit deepfakes.

In general, Ordoñez says, parents can talk about what it means to be a good digital citizen, helping their child think about whether it’s appropriate to mislead people, about the consequences of deepfakes, and about how seeing the images, or being victimized by them, could make others feel.

5. Model the behavior you want to see.

Ordoñez notes that adults, including parents, aren’t immune to enthusiastically joining the latest digital trend without thinking through its implications. Take, for example, how quickly adults jumped on creating stylized AI self-portraits with the Lensa app in late 2022. Beyond the hype, there were significant concerns about privacy, user rights, and the app’s potential to steal from or displace artists.

Moments like these are the perfect time for parents to reflect on their own digital practices and model the behavior they’d like their children to adopt, Ordoñez says. When parents take the time to think critically about their online choices and share what they learn with their child, it shows the child how to take the same approach.

6. Use parental controls, but don’t rely on them.

When parents hear about the dangers of deepfakes, Ordoñez says they often want a “silver bullet” to keep their child away from apps and software that deploy this technology.

It’s important to use parental controls that restrict access to certain downloads and sites, Dodge says. However, these controls are not foolproof. Children can and will find a way around these restrictions, even if they don’t realize what they are doing.

Additionally, Dodge says a child may see deepfakes or encounter the technology at a friend’s house or on someone else’s mobile device. That’s why it’s still critical to have conversations about AI image-based abuse, “even if we impose strong restrictions through parental controls or remove devices at night,” Dodge says.

7. Empower instead of scare.

The prospect of your child harming a peer through AI image-based abuse, or becoming a victim of it themselves, is frightening. But Ordoñez cautions against using scare tactics to discourage a child or teen from engaging with the technology and content.

When speaking to young girls in particular, whose social media photos and videos could be used to generate explicit deepfakes, Ordoñez suggests talking about what it feels like to post images of themselves, and about the potential risks. These conversations should not place blame on girls who want to participate in social media. Still, talking about the risks can help girls think through their own privacy settings.

While there’s no guarantee that a photo or video of them won’t be used against them at some point, they can feel empowered by making intentional choices about what they share.

And all tweens and teens can benefit from knowing that encountering technology capable of creating explicit deepfakes, at a developmental stage when they’re vulnerable to making rash decisions, can lead to choices that seriously harm others, Ordoñez explains.

Encouraging young people to learn to step back and ask themselves how they are feeling before doing something like creating a deepfake can make a huge difference.

“When you step back, [our children] have that awareness, they just need to be empowered and supported and guided in the right direction,” Ordoñez says.
