A.I.101 Part #4: Ensuring a Responsible Approach

Part 2 - Data & Privacy

In This Task…

You will begin to recognize that data and privacy are vulnerable, and that AI is not always accurate and can reflect human bias because it is developed by people.

Your Task…

  1. Watch the video below from (07:00 - 37:05)
    1. (7:00) Data & Privacy: Personal Data
      1. You may wish to further explore: FERPA and/or COPPA
    2. (10:40) Data & Privacy: Copyright
      1. Consider: what is “creative,” and what should be protected by copyright?
    3. (13:12) Educational Guardrails
    4. (17:56) Misinformation & AI Fiction
    5. (25:57) Algorithmic Bias
    6. (35:03) Final thoughts
    7. (36:04) Conclusion


Evidence of Learning...

In a short paragraph or two, reflect on this task:
  • How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
    • Some items you may wish to explore further:
      • FERPA
      • COPPA
  • AI Fiction and Algorithmic Bias: describe what these concerns are and how teachers and students should approach them.

All posted evidence

Part 2

We have a responsibility to protect students from AI risks by understanding and applying privacy laws like FERPA. I think FERPA is basically making sure that student information and educational records are confidential. We should make sure that we avoid sharing students’ personal data like names, grades, addresses, phone numbers, or any other school-related personal information. Another major concern is AI misinformation and algorithmic bias. All AI tools, including ChatGPT, can generate inaccurate or misleading information. We need to guide students to question and verify AI content.
mricupito About 1 month ago

Responsibility

To keep students safe when using AI, I think the biggest thing is awareness. Tools like ChatGPT are great, but they come with risks, especially around data privacy, misinformation, and hallucinations. Knowing about laws like FERPA and COPPA helps remind us that student info shouldn’t be shared or stored without care. I also talk with students about how AI sometimes “makes things up” or reflects human bias, since it learns from data created by people.
Algorithmic bias is a big one! When the system’s results lean one way because of the data it was trained on, it can leave out voices or repeat stereotypes without realizing it. I try to encourage my students to question what they see, double-check facts, and remember that AI isn’t always right! The goal isn’t to avoid AI, but to use it wisely and stay mindful of where the information comes from.
naryanp About 1 month ago

Part 2

When using AI in the classroom, I need to protect students from risks like data privacy breaches, misinformation, and algorithmic bias. To stay compliant with FERPA and COPPA, I would avoid tools that collect personal information or require students to create personal accounts, and remind students not to share identifying details. I would also emphasize that AI outputs are sometimes fabricated, what some call AI fiction or AI hallucinations. This happens because AI produces results based on patterns in the data it was trained on, and those patterns can reflect bias. To address this, I’d teach students to verify AI-generated information with reliable sources and discuss how algorithms can favor certain perspectives. By setting these boundaries and promoting critical thinking, AI can be used safely and responsibly as a learning tool.
pawlak-jayna 2 months ago

Part 2

Mitigating risks related to data privacy, misinformation, and algorithmic bias is crucial for creating a safe learning environment for students. Following regulations like FERPA (Family Educational Rights and Privacy Act) and COPPA (Children's Online Privacy Protection Act) helps us ensure that student information is handled responsibly. Educators must be vigilant in protecting sensitive information by using secure platforms and being transparent about data usage.
The concerns about algorithmic bias in AI tools highlight the need for all of us to critically assess the information these technologies provide. Algorithmic bias refers to the unfair prejudice embedded within algorithms, which can lead to distorted representations or recommendations based on race, gender, or socioeconomic status. We need to guide our students in understanding these potential problems and have them question the sources and context of the information they come across. If we promote digital literacy, we as educators can give students a sense of responsibility to navigate the digital landscape responsibly.
jimford75 2 months ago

Part 2 - Data & Privacy

As AI becomes more prevalent in education, teachers will face more challenges in how they integrate AI while mitigating the associated risks. FERPA protects students' educational records, and teachers are responsible for ensuring that any AI tool used in the classroom doesn't collect, share, or store PII. Teachers, as they are on the front lines of education, have a responsibility to ensure that the platforms, apps, and resources they use are compliant with IT protocols. As a district, we have the responsibility to "vet" AI tools before adopting them into the district's IT policy. New York State's Ed Law 2-d further governs the protection of student and teacher PII by educational agencies and third-party contractors, and one outcome of this requirement is that our district has a list of approved vendors, apps, and websites for use in our classrooms.

With respect to mitigating misinformation, it's important that teachers help instill the idea that critical thinking and source evaluation are required whenever we rely on AI. Students should be encouraged not only to question what they are provided but also to verify what is presented by cross-referencing with credible sources or websites that provide fact-checking capabilities. Through open conversations, students will better understand that AI is a valuable tool and that, while it comes with great benefits, it has ethical considerations and limitations (as it's currently available).

AI fiction is what happens when AI models generate information that can sound plausible but is actually made up or subtly skewed to reflect a stereotype. The AI presents false information as fact, though it is not trying to be deceptive; it has no intent. In the videos we watched, we saw AI quote information from a referenced text but do so incorrectly.

Algorithmic bias is one cause of AI fiction, and it is often invisible. Based on how it was trained, AI makes decisions or generates new content that reflects the bias present in the original data. For example, if an AI was trained on texts written by one specific demographic, it may not accurately represent other demographics.

Teachers and students are aware that AI is a great resource. We need to approach these topics from the standpoint we took on digital literacy, but go deeper: we need to explain how AI works and that it's based on patterns. Students already understand that they can't blindly trust what they find on the internet, and that applies to AI too. When integrating AI into the classroom, we need to drive home the idea that AI is not going to replace a student's need to think; rather, it will drive students to think more critically, as they will need to verify what has been presented, look for alternative perspectives, and acknowledge when and how they use AI (whether to brainstorm ideas or review drafts).
melissa8 6 months ago

Part 2

As educators, it is crucial that we consistently remind students about these risks. When using AI tools designed for education, both teachers and students must be guided to use them responsibly. The issues of AI mistakes, hallucinations (AI fiction), and bias must also be addressed by verifying information through reliable sources. Sometimes simply consulting multiple search engines can help confirm or dispel misinformation. It is important to acknowledge that AI errors are inevitable, and any information that sounds peculiar should be questioned. Knowing our students well and using performance-based assessments provide a more accurate understanding of their true abilities. It is vital to protect student data rigorously: FERPA requires schools to obtain consent before sharing personal educational records, and COPPA restricts data collection from children under 13 without parental approval. To combat misinformation, educators should teach students to critically evaluate AI-generated content by verifying information through trustworthy sources, encouraging inquiry, and modeling critical thinking skills. Additionally, educators must regularly evaluate AI outputs for fairness, promote diverse perspectives by supporting AI tools developed with inclusive data, and educate students about potential biases, fostering critical awareness of technology’s limitations.
brigid-kennedy 6 months ago

Part 2

Teachers must protect student data when using AI. FERPA and COPPA are laws that help keep student information safe. Teachers should not put personal student data into AI tools, and they should use tools that follow the rules. AI can also create false information or show unfair bias. Teachers should teach students to check facts and ask questions. This helps students think clearly and use AI safely!
emily-balisteri 7 months ago

Part 2

As educators begin to incorporate AI into classrooms, it is important to understand data privacy and ensure students’ information is protected. Before using any AI program, we should look for age restrictions, data collection practices, and compliance with FERPA and COPPA. Students also need to be educated about data privacy, as they should understand that personal information should never be shared with AI tools, and they need to be taught how chatbots work, including their limitations and risks. By fostering digital responsibility, we can help students develop safe habits when interacting with these tools.

In addition to privacy concerns, it’s important to remember that AI is not always factually accurate. These tools mimic human language, but they are language models, not knowledge models. AI does not actually understand the information it generates, and it cannot recognize when it makes a mistake. This lack of awareness means errors and biases can easily occur. Educators should emphasize digital literacy by teaching students how to cross-check information using reliable sources and evaluate the credibility of online content.

The video also highlighted the ongoing challenge of algorithmic bias, which was particularly eye-opening. AI-driven facial recognition, for example, has the potential to misidentify individuals, and research has shown that these errors disproportionately impact communities of color. Ultimately, while AI has exciting potential, it comes with significant responsibilities. As educators, we need to stay informed, teach students to think critically, and encourage original thought in assignments. By combining strong privacy protections, critical digital literacy instruction, and thoughtful use of AI tools, we can help students navigate this evolving technology safely and responsibly.
ckearney 9 months ago

How can you mitigate risks for your students? Describe what the concerns of AI Fiction and Algorithmic Bias are and how to approach them.

AI is everywhere, so we need to embrace it; however, there are pitfalls to using it, and teachers need to be aware of them and convey this to students. For example, when utilizing AI it is important for teachers to understand data privacy. Teachers need to see if the programs they are thinking of using were developed for education. The privacy policies should be scanned to look for age restrictions. Teachers are also encouraged to look for guides that explain safety and privacy features. Students need to be informed about data privacy. They need to know not to share personal information. They should be taught to understand how chatbots work and, again, not to share personal information with them. The idea that AI can contain misinformation is also important. Students need to check information with more than one source to ensure they are not using inaccurate information.

AI fiction and bias: AI mimics human language, and it may not be 100% factual. As stated in the video, AI tools are language models, not knowledge models. AI models are not capable of understanding what they are saying like people can, which can result in errors and/or biases. AI is unable to tell if it is making a mistake. Search engines can help with fact-checking, and it is important to evaluate the credibility of online sources.
msionko Over 1 year ago

Part 2

AI Fiction –“Healthy skepticism is a great practice” was the big sentence from the video for me. You encounter a lot of things while reading and viewing content. If we see things we already know, and they are true, we might believe everything we encounter is true, almost thinking “I verified the first three things I have read…I’m good with this tool” but a language model can make mistakes, AND it will not know that it has made them.

AI Bias –“If a facial recognition program is trained predominately on images from one ethnic group it may perform poorly on another group,” so it learns from our biases or the errors we make in the information we feed it to model after. This is problematic, to say the least. That is why working collaboratively is important: a second or third set of eyes, and having that healthy skepticism, the “does that sound right?” or “let me check on another search engine.”

As far as privacy goes, as a kid that is not at the front of your mind if nothing has ever happened where you yourself have been compromised. FERPA and COPPA are there for us to be aware of; we need to get parental consent for anything not school-approved, and we need to remind students of the potential dangers.
dtracz Over 1 year ago

Part II Data and Privacy

This video was full of needed and excellent information, and I felt it was very nonchalant in its presentation. It went quickly, almost brushing over all the pitfalls of AI. True, students are using it and are going to keep using it, but to think that we can really stop their use just by informing them of the pitfalls of data privacy is, I think, a big ask. Kids post stuff all the time without even a thought to who sees it or the data they may have inadvertently shared. That said, as educators we need to consistently remind students about this fact. Beyond this, I think it is good to run things by the IT department in our school. They should also know about FERPA and COPPA, parental restriction capabilities, etc. If there are products out there that are set for educational use, then, yes, teachers and students should learn to use them responsibly.

AI mistakes, hallucinations (AI fiction), and bias should also be addressed by following up with other reliable sources of information. Sometimes just by using a couple of different search engines you can confirm or dispel bad information. It is important to keep reminding ourselves that mistakes will be unavoidable and that if something sounds peculiar there is likely a reason for that.

Finally, it is important to address, as best we can, any bias that may turn up as well. This bias can be unfair, and it results from the data the models are trained on. These repeatable errors based on the algorithm may lead to unjust outcomes, like preferring one group over another, or consistently erring in one direction and causing skewed conclusions. Like other AI mistakes, other sources must be used to separate the facts from the fiction. In fact, it was noted that ChatGPT has a left-leaning bias, and that when AI detectors are used on nonnative speakers' writing, it is almost always flagged as AI-made. It is best to view bias errors as another kind of mistake that AI makes and to remind students that AI is not personal but is only trying to glean information based on what it has been trained on. Moreover, knowing your students and using performance-based instruction will tell you more about what your students ultimately produce and understand than any kind of AI detector.
jduma Over 1 year ago

Part 2 Evidence

How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)
It is important, prior to using AI tools in the classroom, to consider and research whether the tool is intended for school use. You can do this by completing a simple search of the tool's privacy policy to see if it is both FERPA and COPPA compliant as well as intended for school use; if those are not included in the privacy policy, it's best to consult the IT team.

AI Fiction and Algorithmic Bias.  Describe what this concern is and how teachers and students need to approach it.
AI fiction relates to the fact that tools like ChatGPT are large language models, which means they were created to mimic language, not knowledge. So when using tools like this, just as with any other internet source, we need to keep that in mind: we shouldn't blindly trust that the information is factual. In fact, a lot of the time it can contain incorrect information but present it as though it were entirely factual.

AI also has algorithmic bias, which can lead to problems like misidentification of people of color, underrepresentation of people of color in YouTube videos, and disproportionate flagging of writing by English language learners.

AI tools are not going away and in fact will only continue to grow in popularity. It is our job as educators to help inform our students of the helpful ways we can use AI, but it is just as important to teach them about the inconsistencies and inaccuracies these tools can produce. This will help them use AI properly.

kielebarbalate Over 1 year ago