Deepfake Cyberbullying in Schools: A Growing Concern

Key Takeaways

  • The use of artificial intelligence (AI) to create deepfakes, including nude images and videos of classmates, is a growing problem in schools.
  • At least half of the states in the US have enacted legislation addressing the use of generative AI to create deepfakes, including simulated child sexual abuse material.
  • Experts recommend that schools update their policies on AI-generated deepfakes and educate students and parents about the issue.
  • The trauma caused by AI deepfakes can be particularly harmful, leading to depression, anxiety, and feelings of shame and vulnerability.
  • Parents are encouraged to talk to their children about deepfakes and establish a safe and supportive environment where they can report incidents without fear of repercussions.

Introduction to the Problem
The increasing use of artificial intelligence (AI) to create deepfakes, including nude images and videos of classmates, has become a significant concern for schools. As Lafourche Parish Sheriff Craig Webre noted, "the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience." This has led to a surge in incidents, with the National Center for Missing and Exploited Children reporting a significant increase in AI-generated child sexual abuse images, from 4,700 in 2023 to 440,000 in just the first six months of 2025.

Legislative Response
In response to the growing problem, many states have enacted legislation addressing the use of generative AI to create deepfakes. According to the National Conference of State Legislatures, at least half of the states have enacted legislation, with some laws specifically addressing simulated child sexual abuse material. As Republican state Sen. Patrick Connick, who authored Louisiana’s law, noted, "this incident highlights a serious concern that all parents should address with their children." The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state’s new law.

The Evolution of Deepfakes
Deepfakes started as a way to humiliate political opponents and young starlets, but the technology has evolved to the point where anyone can create realistic images and videos with minimal effort. Sergio Alexander, a research associate at Texas Christian University, noted that "now, you can do it on an app, you can download it on social media, and you don’t have to have any technical expertise whatsoever." This accessibility has made it easier for students to create and disseminate deepfakes, often with devastating consequences for the victims.

Schools’ Response
Experts fear that schools are not doing enough to address AI-generated deepfakes. Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and educate students and parents about the issue. He noted that "many parents assume that schools are addressing the issue when they aren’t" and that "we hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn’t happening amongst their youth." Schools need to take a proactive approach and provide support to victims.

The Trauma of AI Deepfakes
The trauma caused by AI deepfakes can be particularly harmful, leading to depression, anxiety, and feelings of shame and vulnerability. Alexander noted that "they literally shut down because it makes it feel like, you know, there’s no way they can even prove that this is not real — because it does look 100% real." The cycle of trauma can be prolonged as the images and videos continue to resurface, making it essential for schools and parents to provide ongoing support to victims.

Parental Involvement
Parents play a crucial role in addressing the issue of AI-generated deepfakes. Alexander recommends that parents start a conversation with their children by casually asking them if they’ve seen any funny fake videos online. From there, parents can ask their kids to think about how they would feel if they were in a similar situation and if they know anyone who has been a victim of deepfakes. Laura Tierney, the founder and CEO of The Social Institute, uses the acronym SHIELD as a roadmap for how to respond to deepfakes, emphasizing the importance of stopping the spread of the images, huddling with a trusted adult, and limiting social media access. By establishing a safe and supportive environment, parents can help their children feel comfortable reporting incidents and seeking help.

https://www.pbs.org/newshour/education/ap-report-rise-of-deepfake-cyberbullying-poses-a-growing-problem-for-schools