Joshua Lukose 4/12/2025
In recent years, schools have begun incorporating AI into the surveillance tools they use to monitor students. These tools include facial recognition, keystroke tracking, and behavior analysis, and the decisions to flag students for certain actions are made by AI. Supporters believe this improves student safety and accountability and helps prevent cyberbullying. Some also argue that it allows schools to intervene earlier in cases of mental health struggles or dangerous behavior. Others disagree, however, arguing that this kind of data collection happens without students' consent.
As mentioned, these tools collect large amounts of student data without explicit consent. In many cases, even teachers and staff are unsure of how the data is collected or used. For example, some schools in California began using facial recognition on students through their surveillance systems without notifying parents, which raises the question of whether the students themselves knew their data was being collected. Additionally, these tools aren't perfect: they're prone to flagging innocent content as threats, because AI often lacks situational context and misjudges what actually happened.
AI can also exhibit bias, especially in facial recognition. These tools may carry racial bias, flagging students of color more frequently, and such mistakes can result in unfair punishments for certain groups of students. Storing this much student data is also a privacy concern in itself, since it makes schools a more attractive target for cyberattacks.
Because of these faults and privacy concerns, some states are trying to limit or ban AI-powered surveillance tools in schools. Civil liberties groups like the ACLU argue that safety shouldn't override student rights, calling this kind of surveillance an intrusion on student privacy. Despite these ongoing efforts, whether AI surveillance should be allowed remains up for debate, and the core question is how to balance safety with privacy.
Many schools face difficult decisions when deciding whether to adopt AI surveillance. As these tools become more advanced and easier to implement, schools must ask: How much monitoring is too much? What counts as a threat? And what does this mean for students' mental health, sense of trust, and overall school experience in the long term?