Melissa Minorczyk

Part 2 - Data & Privacy

  • July 3, 2025 at 8:39 AM
As AI becomes more prevalent in education, teachers will face more challenges around how they integrate AI while mitigating the associated risks. FERPA protects students' educational records, and teachers are responsible for ensuring that any AI tool used in the classroom doesn't collect, share, or store PII. Teachers, as they are on the front lines of education, have a responsibility to ensure that the platforms, apps, and resources they use comply with IT protocols. As a district, we have the responsibility to "vet" AI tools before adopting them into the district's IT policy. New York State's Ed Law 2-d further governs the protection of student and teacher PII by educational agencies and third-party contractors; one outcome of this requirement is that our district maintains a list of approved vendors, apps, and websites for use in our classrooms.

With respect to mitigating misinformation, it's important that teachers instill the idea that critical thinking and source evaluation are required whenever we rely on AI. Students should be encouraged not only to question what they are provided but also to verify what is presented by cross-referencing it with credible sources or fact-checking websites. Through open conversations, students will better understand that AI is a valuable tool and that, while it comes with great benefits, it also has ethical considerations and limitations (at least as it's currently available).

AI fiction is what happens when AI models generate information that sounds plausible but is actually made up or subtly skewed to reflect a stereotype. The AI presents false information as fact, though it isn't trying to be deceptive, since it has no intent. In the videos we watched, we saw AI quote information from the referenced text but do so incorrectly.

Algorithmic bias is one cause of AI fiction, and it's often invisible. Based on how it was trained, AI makes decisions or generates new content that reflects the bias present in the original data. For example, if an AI was trained on texts written by one specific demographic, it may not accurately represent other demographics.

Teachers and students are aware that AI is a great resource. We need to approach these topics from the standpoint we took with digital literacy, but go deeper: we need to explain how AI works and that it's based on patterns. Students already understand that they can't blindly trust what they find on the internet, and that applies to AI too. When integrating AI into the classroom, we need to drive home the idea that AI is not going to replace a student's need to think; rather, it will drive students to think more critically, as they will need to verify what has been presented, look for alternative perspectives, and acknowledge when and how they use AI (whether to brainstorm ideas or review drafts).