When news spread that Lancaster Country Day School students used artificial intelligence to create over 300 nude images of their classmates, state Sen. Tracy Pennycuick, R-Montgomery, said she realized new laws were needed. The problem, Pennycuick would learn, was that the laws regarding child sexual abuse material (CSAM) and revenge porn were written before AI gained popularity.
“We knew that we needed to start putting laws in place, because we were hearing from the DA that it wasn’t illegal, because when the law was written, AI wasn’t a thing,” Pennycuick said. “So that was a loophole in the law, and our CSAM bill closed that loophole.”
The two perpetrators ultimately pleaded guilty to charges of manufacturing CSAM and criminal conspiracy.
The case, although jarring, isn’t unique to the commonwealth or the country. Researchers estimate that deepfakes — meaning images, audio, or videos edited or generated by AI — have increased more than fivefold since 2019, with studies showing that over 90% of pornographic deepfakes depict women and girls. Local organizations formed to support victims of CSAM and revenge porn say they’ve been hit with an influx of reports from victims of explicit deepfakes. Legislators are scrambling to play catch-up with the rapidly developing technology to protect their constituents from having their digital likeness weaponized against them.
In 2024 and 2025, Pennsylvania joined more than 40 other states in legislating against non-consensual deepfakes, passing laws that prohibit the use of AI to create materials depicting minors in sexually abusive acts and that bar the dissemination of deepfakes with malicious intent. Experts told PublicSource that there's a need to educate the public on the newly passed laws, and for updated policies within schools to prevent the spread of harmful deepfakes.
State law doesn’t yet address mandated reporters
Pennycuick, who chairs the Senate Communications and Technology Committee, co-sponsored Act 125 and Act 35, following prompting from the Pennsylvania District Attorneys Association. Passed in 2024 and 2025, the laws respectively prohibit the use of AI to create depictions of minors engaging in sexually abusive acts, and bar the creation and dissemination of deepfakes with the intention to mislead or cause harm.
These laws, Pennycuick said, still left unaddressed another concern raised by the Lancaster incident: School administrators failed to act immediately or forcefully when they became aware of the imagery.
“Those poor young ladies were victimized repeatedly, so the school knew about the pictures, and they are mandatory reporters, and they did not report,” Pennycuick said. “They waited six months until the young men … had created pornographic movies of the young girls. So during that time frame, those pictures were still out there, being circulated among their peers.”
Pennycuick also cited a case in Bucks County, where a student used AI to create pornographic images of his classmates.
Pennycuick also introduced Senate Bill 1050, which would require all mandated reporters to notify authorities when they suspect AI-generated CSAM. Mandated reporters are workers who have frequent contact with children — such as teachers and medical professionals — who are required to report suspected abuse. The bill won unanimous approval in the Senate but has been awaiting House Judiciary Committee action since November.
“We want to make sure that as soon as an adult knows about it, it stops, it’s removed from the internet, and the girls can hopefully heal and move on past the victimization,” Pennycuick said.
Federal law has loopholes
Nationally, the Take It Down Act of 2025 made it a federal crime to knowingly publish nonconsensual intimate images, whether created with AI tools or by real-world methods. The federal law also requires social media platforms to remove such images within 48 hours of receiving a report of them, a mandate that took effect this month.
Amy Groff of law firm K&L Gates' Cyber Civil Rights Legal Project considers the Take It Down Act's passage a significant turning point for victims: it is the first federal effort to explicitly address deepfakes, and it requires platforms to swiftly take down reported content.
“That was certainly a significant change in the legal landscape at the federal level, and that has both penalty provisions for sharing these images without consent, and also provides a requirement for certain platforms to have a process, and to actually take down those images when they receive a request to do so within 48 hours,” Groff said.
“In my experience, the victims who had deepfakes — intimate deepfakes of them — circulated or posted, I think the harm that they feel is very real and is very similar to what victims who’ve had actual photographs or videos shared experience,” Groff said.
Still, some advocates expressed concern that the law has too many loopholes. The Cyber Civil Rights Initiative, for instance, warned that the law could “benefit unscrupulous platforms” seeking to use false reports to ensnare competitors and give victims false assurances of redress.
“The Take It Down Act’s removal provision has been presented as a virtual guarantee to victims that nonconsensual intimate visual depictions of them will be removed from websites and online services within 48 hours,” the Cyber Civil Rights Initiative said in a statement. “But given the lack of any safeguards against false reports, the arbitrarily selective definition of covered platforms, and the broad enforcement discretion given to the FTC with no avenue for individual redress and vindication, this is an unrealistic promise.”
Ari Lightman, a digital media and marketing professor at Carnegie Mellon University, cautioned that the law seems to be more of a deterrent than something that offers victims restitution. Lightman pointed to instances in which citizens have sued companies such as Meta for hosting content harmful to children, but said he worried that any fine or payment awarded through the court system might be too insignificant to prompt companies to make long-term policy changes.
Lightman added that it’s challenging to remove a person’s digital likeness from the internet. Sergio Alexander, a doctoral student at Texas Christian University studying AI deepfakes, agreed, adding that screenshots can be taken, videos can be downloaded, and a person might not know of every platform a deepfake has been uploaded to. Replicas could also appear on platforms in countries that don’t adhere to U.S. laws.
Laws + education = lasting change
Even with new laws, state and national advocates and experts worry that educators aren’t sufficiently informed about AI to support their students, nor are they prioritizing teaching ethical uses of technology.
Leslie Slingsby, CEO of the Mission Kids Child Advocacy Center in Montgomery County, remembers when teens and parents alike needed to be educated about cell phone etiquette and the laws around taking and distributing nude photos. She suspects that, if polled now, the average teen would know that taking and distributing explicit photos of classmates is both unethical and illegal, but she is less confident they'd answer similarly about explicit deepfakes.
“We need to talk to our kids and parents in the state of Pennsylvania so that they understand the consequences of their actions and that professionals have clear guidance on how they’re supposed to respond if they’re made aware of any explicit deepfakes,” Slingsby said.
The center lobbies for legislation that protects children, and it backed Pennycuick's bill calling for changes to mandated reporting. Slingsby feels the bill would address a gap in mandated reporting laws, but she's concerned that some mandated reporters are still unaware of or confused about their responsibilities, should they become aware that an explicit deepfake was made of a child in their care.
“I think that there could be some easy misunderstanding or misinterpretation of the law, because it’s not very explicit in how it’s described” without the clarification provided by the Senate bill, Slingsby said. “I don’t know [if] all of our mandated reporters understand that and that they have a responsibility.”
Slingsby said young people would be better served if approaches to dealing with explicit deepfakes focused more on prevention and education for children and adults alike. In Slingsby’s experience, parents or students become concerned about explicit deepfakes after they’ve found themselves the subject of one. For educators, Slingsby lamented the lack of official guidance.
Help and Resources
- Cyber Civil Rights Initiative: A nonprofit founded to combat online abuse. To report abuse or reach the helpline, call 1-844-878-2274.
- K&L Gates Cyber Civil Rights Legal Project: Offers pro bono legal help to victims of nonconsensual pornography.
- DeepFake-O-Meter: A free, open-source platform developed by the University at Buffalo to detect AI-generated media.
- CyberTipline: A national centralized reporting system for suspected incidents of child sexual exploitation. To report a case, call 1-800-843-5678.
- Take It Down: A free service operated by the National Center for Missing & Exploited Children to remove explicit images of a person taken when they were minors.
- Stop Non-Consensual Intimate Image Abuse: A tool developed by the Revenge Porn Helpline to help adults find and prevent the spread of non-consensual intimate images of them online.
“It has not been incorporated yet in any of the state-approved trainings on mandated reporting,” Slingsby said. “I think that overall, we’re not doing a great job of communicating this new … technology and what adults’ responsibilities are in being aware of the technology and reporting … any explicit deepfakes made regarding children with this.”
Alexander supports prevention-focused approaches, but, reflecting on his prior role as a public school teacher, he said most schools are slow to enact policies related to technology.
“Updating policies is a crucial step, because at least now you’re recognizing that this is an issue and you put it in writing,” Alexander said.
Pittsburgh Public Schools did not respond to a request for comment on whether it had an AI policy. However, the district’s 2025-26 Code of Conduct does prohibit cyberbullying, which occurs inside or outside of school and is “severe, persistent or pervasive and has the intent or effect of: creating an intimidating or hostile environment that substantially interferes with a student’s education; or physically, emotionally or mentally harming a student.”
The Allegheny Intermediate Unit, which assists districts outside of Pittsburgh but within the county, is developing an AI policy for presentation to its board later this year, according to a spokesperson.
Local educators can look to the West Coast for a potential student-led model for spreading understanding of deepfakes and existing protections.
The Center for Gender Equitable AI is taking a student-led approach to curbing the spread of explicit deepfakes with its recent STOP Initiative, for “Say something, Take it down, Offer support, Provide recourse.” Led by Oregon high school students Julianne Huang and Richa Pandit, the initiative offers training and advice on school policy reform and aims to spread awareness about explicit AI-generated deepfakes.
Huang and Pandit offer workshops and a toolkit that explain the technology, federal laws and ways educators can start conversations about AI literacy. They also advise schools on changes to their cyberbullying policies that could protect students while adhering to federal and local laws, and they point to resources to support students who’ve already been affected. In the future, the initiative plans to offer virtual workshops to interested schools.
“This is particularly for educators and administrators looking to implement policy,” Huang said, “and from the youth perspective, how they can create systems which will enable people to have better reporting channels … basically have administrators and students be notified of incidents in a more rapid manner.”
As of 2025, 45% of principals reported that their districts or schools had policies or guidance on AI use in schools, according to a nationwide study.
Deepfake cyberbullying can be devastating for adolescents, Alexander said, but even when administrators are sympathetic, they’re often at a loss on how to deal with it appropriately. Existing policies tend to follow the traditions of the past and focus on punishment, and while accountability matters, Alexander said, punishment alone doesn’t get to the root of the problem.
“You can punish the person who’s created all that stuff after the fact, but the damage is already done,” Alexander said. “And so really, the most important thing is schools … actually need to start educating students and, you know, kind of establishing ways to prevent that from happening in the first place.”
Atiya Irvin-Mitchell is a Pittsburgh-based freelance writer and can be reached at airvinmitchell@gmail.com.
This story was fact-checked by Bella Markowitz.




