School districts across Allegheny County agree on one thing: AI is here to stay. As education-focused AI tools become increasingly common, many districts are working to enact policies and issue guidelines on the use of AI within classrooms.
Traditional AI programs perform tasks that would otherwise require human or animal intelligence by following sequences of rules and algorithms. Social media algorithms, recommendation engines for streaming services like Netflix and voice assistants like Apple’s Siri and Amazon’s Alexa were already widely used before 2022.
Generative AI (gen AI) is a newer innovation that can create novel content. ChatGPT and other popular gen AI tools like Google Gemini and Anthropic’s Claude are built on large language models, meaning they’re trained on vast amounts of data scraped from the internet, including human-written text.
Teachers learning to use — not fear — AI
Research shows middle and high school students and educators are increasingly using gen AI. Common Sense Media’s Dawn of AI study found 40% of teens reported using gen AI for school assignments.
The Center for Democracy & Technology’s (CDT) 2024 report found that 83% of public school teachers for grades six through 12 used AI for personal or school purposes during the 2023-24 school year, up from 51% the prior year.
Schools and their districts are increasingly creating AI policies and training teachers.
Districts across the Pittsburgh region are in varying stages of developing and implementing AI guidelines and policies.
How are regional districts adapting to AI?
Cornell School District’s school board approved its policy last year. Beaver Area School District created guidelines last summer. Woodland Hills and Hempfield Area school districts are still developing policy and guidelines respectively, and Baldwin-Whitehall School District has some guidelines outlined within other policies and resources like the student code of conduct and faculty handbook.
The general attitude among the district leaders Public Source spoke with is that they must train their teachers to prepare students for a world in which AI is becoming ubiquitous.
Kris Hupp, director of technology and instructional innovation at Cornell, said last year some district teachers participated in a professional development “learning pathway” about AI in the classroom, collaborating with Lindsay Forman, the K-12 coordinator from Carnegie Mellon University’s Simon Initiative. Hupp said the teachers concluded that they wanted to create “some sort of structure to teach our students how to use AI.”
“We feel that if we’re not preparing our students to function in a world with AI, we’re not preparing our students for the future,” Hupp said.
At Hempfield Area, deputy superintendent Emily Sanders this spring led a district in-service training on the stoplight strategy, in which teachers color-code assignments to signal how much AI use is permitted, demonstrating the many applications of AI in schools.
The Pennsylvania School Boards Association (PSBA) issued in May 2024 a policy guide focused on the seven principles of TeachAI, an educators’ initiative seeking effective and responsible uses of AI in schools.
Sanders said that some school districts are choosing to create guidelines instead of policies, as policies require a long committee process and multiple readings before the school board. Guidelines can be more easily created and tweaked.
What are popular AI classroom tools — and the risks?
Teachers are using a variety of tools to teach, assign classwork and grade their students’ assignments. Some popular tools among teachers are:
Robbie Torney, senior director of AI programs at Common Sense Media, said it’s important for districts to have AI policies to give educators and students clarity on what is and isn’t allowed. According to the organization’s research, “80% of parents surveyed said that they hadn’t heard from their school about whether their child was allowed to use AI or not. And teens also expressed similar confusion about whether or not they could use AI in their classrooms.”
He added that districts need to train teachers because in some cases using AI can be inadequate or even risky, leading to unintended consequences.
One such risky use case is using AI to create Individualized Education Programs (IEPs). Because AI tools don’t know the students, they can’t reliably capture student needs in their IEPs. Torney and his team found that AI tools’ bias or “invisible influence” can also negatively affect such uses.
“We generated large numbers of behavior plans and aggregated them based on racial or ethnic identities and then analyzed them, and we started to see differences between, say, the behavior plans for students with white-coded female names and Black-coded female names, or white-coded male names and Black-coded male names,” Torney said.
The CDT report found that while schools are increasingly creating policies for and training their teachers in AI use, significant risks remain. More than two-thirds of teachers reported using AI content detection tools that the report and Torney agree are not reliable.
“The technology that underlies them is not effective,” Torney said. “Their false positive and false negative rates are just, frankly, so high that if they were in any other industry, like in medicine, for example, you would just say that tool is unusable. They are easily bypassed with very well-understood tips and tricks that are very easily accessible to students.”
Lajja Mistry is the K-12 education reporter at Pittsburgh’s Public Source. She can be reached at lajja@publicsource.org.
Bella Markovitz is a journalist and a former intern at Pittsburgh’s Public Source. She can be reached at btmarkovitz@gmail.com.
This story was fact-checked by Ember Duke.