A.I.101 Part #4: Ensuring a Responsible Approach

Part 2 - Data & Privacy

  • Last updated December 18, 2023 at 12:21 PM by sweethometc

In This Task…

You will begin to recognize that data and privacy are vulnerable, and that AI is not always accurate because it is developed with human bias.

Your Task…

  1. Watch the video below from (07:00 - 37:05)
    1. (07:00) Data & Privacy: Personal Data
      1. You may wish to further explore: FERPA and/or COPPA
    2. (10:40) Data & Privacy: Copyright
      1. Consider: what is “creative,” and what should be protected by copyright?
    3. (13:12) Educational Guardrails
    4. (17:56) Misinformation & AI Fiction
    5. (25:57) Algorithmic Bias
    6. (35:03) Final thoughts
    7. (36:04) Conclusion


Evidence of Learning...

In a short paragraph or two, reflect on this task:
  • How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
    • Some items you may wish to explore further:
      • FERPA
      • COPPA
  • AI Fiction and Algorithmic Bias. Describe what these concerns are and how teachers and students should approach them.

All posted evidence

Evidence Part 2

Being proactive when using AI in the classroom is the best way to mitigate risks for our students. Privacy policies are readily available and should be scanned for information regarding FERPA, COPPA, or specific mention of educational use. Teachers should also teach digital citizenship and make sure students apply it to AI as well. Like any tool, students and staff should question the results before believing or using the information in full.

AI Fiction is the confidently produced misinformation generated by AI. Because AI is not human, this is entirely possible. Teaching students to question and be skeptical of information is the best way to protect them against AI Fictions. Students should use multiple prompts to interrogate information that seems off. If this still does not work, the information should be sought out in other ways. Using AI to get concrete information, or having it do the work for the student, should not be the mindset. Instead, using thoughtful prompts to enlist its support is the best approach for educational use.

Algorithmic bias refers to the repeated mistakes in an AI system that lead to unfair results, like favoring one group of people over others. To combat these biases and misinformation, we can use the same approaches as we do against AI Fictions. In addition, students should have access to other sources of information, like quality search engines and databases.
kelly-gravel Over 1 year ago

Part 2- Data and Privacy...

1. What are AI Fictions?...
Sometimes AI systems can confidently produce text that sounds very real but is actually not true. AI systems don't have a true understanding of what they're saying like humans do, so they often can't tell when they're making a mistake. They're certain their responses are right, even when they're wrong! So again, it's important to communicate to students that AI isn't always correct. These systems were designed to be large language models, not knowledge models. This reminds me of how we used to talk with our students about Wikipedia. We need to teach our students to be critical and skeptical thinkers and NOT to believe everything they read from AI sources like ChatGPT. Healthy skepticism is a great mindset for our students to practice as they encounter more and more information on the Internet, at school, at home, and beyond.

2. What is Algorithmic Bias?...
This refers to the systemic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Some teaching strategies and practices we can use to combat these biases and misinformation include the following: as stated above, teaching our students to exercise healthy skepticism and caution when asking AI to generate information for them; re-prompting AI when something sounds off, and reevaluating by asking if it's sure about the selected topic or subject; emphasizing and teaching digital literacy skills, such as corroborating information, checking for bias, and evaluating the credibility of sources (which we do with our SEQ's, where we discuss audience, purpose, bias, and POV and how they affect a document's use as a reliable source of evidence); getting more creative with assignments (more critical-thinking assignments); and using a variety of tools (different search engines). Once again, it's all about having open and honest dialogue in our classes with our students, earlier in the year rather than later.
martjd28 Over 1 year ago

Educate & Empower Students

  • How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
  • Especially in our schools, we need to educate students about the risks associated with sharing personal data.  We need to promote awareness and empowerment for our students.  
  • Parents do have access to their child's educational records through FERPA.
  • AI Fiction and Algorithmic Bias. Describe what these concerns are and how teachers and students should approach them.
  • AI learns by studying the work of others. We need to check copyrights. It's important to develop students' skills as informed users of these AI tools so they can craft their own stance on data ownership.
bonnie-lorentz Over 1 year ago

Evidence

When utilizing AI, it is important for teachers to understand data privacy. Teachers need to see if the programs they are thinking of using were developed for education. Privacy policies should be scanned to look for age restrictions. Teachers are also encouraged to look for guides that explain safety and privacy features. Asking administrators or leaders for guidance on AI might also be helpful. Students need to be informed about data privacy. They need to know not to share personal information. They should be taught to understand how chatbots work and, again, not to share personal information with them.

AI fiction and bias 
AI mimics human language, and it may not be 100% factual. These are language models, not knowledge models. AI models are not capable of understanding what they are saying like people can, which can result in errors and/or biases. AI is unable to tell when it is making a mistake. Search engines can help with fact checking. It is important to evaluate the credibility of online sources.
lwargo Over 1 year ago

Data and Privacy

In a short paragraph or two, reflect on this task:
  • How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
    • Some items you may wish to explore further:
      • FERPA
      • COPPA
  • AI Fiction and Algorithmic Bias. Describe what these concerns are and how teachers and students should approach them.
To be perfectly frank, I am not certain exactly how to make sure that students are safe in terms of FERPA and COPPA. I think I would need to check with our district technology committee and see if we are allowed to have students work with AI platforms. If so, which ones? If certain ones are allowed, what are the limits and guidelines? I am fully aware that some of these platforms may be primed to dig into student data, and that it is OUR responsibility as educators to protect our students' data and privacy from the tools we place in their hands.

I found the bit on AI fiction and misinformation fascinating. While watching the two women discuss Pride and Prejudice, GPT at first gave an incorrect response. What was most interesting, and had me on the edge of my seat for a second, was when they prompted it again and it came back with a corrected interpretation and quotes. However, my enthusiasm fell when they went on to point out flaws in each piece of evidence GPT had gathered. From an incorrect chapter to wrong quotes and quotes that did not support the claim, it seemed that ChatGPT was not up to this task.

Algorithmic bias seems like a challenging issue that is best addressed by having diverse human populations create the algorithms that are used. A homogeneous group of coders will end up creating biased algorithms however hard they try, because no small group of homogeneous people can possibly imagine all the points of view and experience that could come to bear. In fact, no group, except the entirety of humanity, could completely eliminate that problem. Knowing both these extremes, we should strive to do the best we can: recognize there will always be gaps, so once algorithms are published and activated, we must continue to review them for biases that were missed initially, institute solutions, and brainstorm future solutions, recognizing members who were able to overcome the challenges and pitfalls that have already happened.
john-elliott Almost 2 years ago

Liberta-Part II

AI is an amazing technology that people can utilize for many tasks and take advantage of for a variety of purposes. But as the video mentions, it is equally important to understand safeguards that protect us from AI getting ahold of our personal data, violating our privacy, and exposing us to misinformation. Parental consent, and research on the teacher's part to ensure AI tools are FERPA- and COPPA-compliant, are necessary to ensure student data safety and regulated access. As an educator, becoming familiar with NYS Ed law on the use of AI is essential to protecting students and ourselves. Also, increasing student education in the area of responsible use is, to me, the greatest safeguard. Repeated education in digital citizenship on an annual basis (like a yearly seminar/refresher) should be a must for all school districts.

As a teacher I am always concerned with my students' lack of information and knowledge about the amount of access AI and its algorithms can have into their lives. Education is the first approach that needs to be taken. Common Sense Media is a great tool, and district initiatives should be taken to consistently and thoroughly keep students informed. I am also concerned about student access even when they are not specifically on an app or site. The video discusses the way devices monitor conversations for data collection, and so students' information can often be mined unknowingly at any moment.
dliberta Almost 2 years ago

Data Privacy

How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
  • Some items you may wish to explore further:
    • FERPA
    • COPPA
  • AI Fiction and Algorithmic Bias. Describe what these concerns are and how teachers and students should approach them.

The first thing you need to do to combat data privacy risks is to be informed. We have to know what we are using and what it collects while we use it. That is the first step to being an informed user of AI.

Tools like Common Sense Media help with this. I use it for many things my own kids ask me about when they want a tech tool or app to download on their phones. It helps me stay informed and allows for discussions around age restrictions and the like, which most kids don't consider or believe are unfair. This is all part of the larger Digital Citizenship discussions that we need to have with our students. A comprehensive district approach is needed so that we discuss this with all of our kids, rather than in pockets, expecting all educators to do so in all of their classrooms.

We also have Ed Law 2D to consider in NYS when it comes to using any tools or apps with students. This gets to the larger topic of not sharing PII.



AI fiction is when LLMs create inaccurate information. Since all they are doing is predicting text to create answers, they don't know when they are wrong. We need to know that these inaccuracies are possible and check for incorrect information. It forces us to use our digital literacy skills. Having students corroborate information is a great skill that will help them in the future and teach them how these tools can give inaccurate info.

Algorithmic Bias is what happens when an AI tool consistently skews its results in the same ways. It seems like awareness that this exists, and having discussions about what it may mean, is a good starting point with students.
brent-peterson Almost 2 years ago

Part 2

The video offered many great suggestions on how educators can help mitigate the risks for students surrounding data privacy and misinformation. The presenters initially suggested seeking local guidance, where educators can ask administrators for information on AI tools. Their next steps would involve: learning more about state and local laws; scanning privacy policies to see if they include items surrounding school use, FERPA, and COPPA; checking for age restrictions; and examining the type of data that is collected. By ensuring that AI tools have strong educational guardrails, we can help promote safe student interaction and responsible use of data. Some of these guardrails include limiting the number of messages students can receive each day, making chat history visible to parents and/or teachers, and monitoring messages. Privacy settings can be adjusted to disable tracking and data storage. We can also empower students by giving them a choice about whether or not to engage with this software and/or AI tools.

I definitely agree that there is much work and monitoring to be done when incorporating AI in our classrooms. Students should continue to be skeptical of the information being provided to them. Educators will need to continue to emphasize digital literacy skills and encourage students to use a variety of sources to complete assignments. We should continue to be creative with assignments that encourage original and personal thought.

The portion of the video that surprised me was how AI struggled to cite sources, and the ongoing errors and problems with algorithmic bias. Teaching Criminal Justice at the high school, I was intrigued by the discussion of how facial recognition could lead to wrongly identifying suspects and often reflects racial biases. This is definitely a topic of growing concern that will need to be discussed in the future.
cutzig Almost 2 years ago