A.I.101 Part #4: Ensuring a Responsible Approach

Part 3 Beyond the Episode


  • Last updated December 18, 2023 at 12:30 PM by sweethometc
Use the resources provided to extend your learning!

In this Task…

Explore the Resource Link below to review various areas that might interest you:
https://docs.google.com/document/d/1m8kmhdd3BbWhRD48BiEPYjiU61JJy-V80bfge0NTFVg/copy

Your Choice:
  1. Pick ONE area highlighted in the Resource Link (listed below, 1-3). A variety of bulleted articles and materials to explore are listed at the end of each section.
    1. Data Privacy (any of the four bulleted items)
    2. AI Fictions (any of the six bulleted items)
    3. Bias (any of the five bulleted items)
  2. Explore one (1) article or video from the bulleted list (Data Privacy, AI Fictions, or Bias), then consider what you learn and how it might shape your use of AI in the classroom.

Evidence of Learning...

In a short paragraph or two, reflect on this task:
  • Explain how one of these additional resources may help you use AI in the classroom. 

All posted evidence

Part 3

I think one of the most meaningful ways to have students take ownership of AI at this point is through discussion. I focused on the Data Privacy section, which suggests using AI as a conversation starter to get students thinking about their stance on data privacy and how they want to protect their data, and emphasizing positive habits when it comes to data privacy. I think it's important for students to first set their own limits on AI through discussion and figure out what it means to set data privacy boundaries. This allows them to understand the why behind data privacy. This type of discussion can then lead into plagiarism and how we can set limits on what counts as an acceptable amount of AI use.
mricupito About 1 month ago

Part 3

I explored the Data Privacy section of the Guide to Ensuring a Responsible Approach to AI, and it really made me think about how much information we give up, usually without realizing it, when using digital tools in the classroom. The takeaway that, for companies, “more data is better” stuck with me, because it means our students’ data can easily become part of that collection process. As educators, it’s on us to be aware of privacy settings and actually talk with students about what’s being shared.
This resource gave me some good steps I can apply right away, like letting students make informed choices about when to use certain tools. It also reminded me that privacy shouldn’t feel heavy or hopeless; students can still learn how to protect their data and make smart decisions online.
naryanp About 1 month ago

Part 3

After reading the article about Google's Bard error at launch, I am reminded that even advanced systems make mistakes or hallucinate facts. That helps me as an educator by reinforcing the idea that AI-generated answers should always be double-checked before being used in lessons or assessments. This is valuable for me when I use AI to create class notes or assignments, as well as for my students, who may use it to study or complete assignments. In class, I might use the Bard mistake as a case study, showing students the AI’s incorrect answer and asking them to find and correct the error using reliable sources. This gives them a concrete example of AI’s fallibility and strengthens their critical thinking about trusting AI outputs. I could find a few other examples as well so they could have repeated practice.
pawlak-jayna 2 months ago

Part 3

I read the article "Most Tech Companies Profit Off Student Data, Even If They Say Otherwise." As an educator, I find this report both alarming and, unfortunately, not entirely surprising. We've been trusting these platforms with our students' digital footprints, often without fully understanding the hidden costs. The distinction between "selling data" and "monetizing behavioral profiles" feels deliberately deceptive. When I choose an educational app for my classroom, I'm thinking about learning outcomes, not whether my students will be followed across the internet with targeted advertisements. The fact that companies bury their true data practices on "page 50" of user agreements highlights a fundamental power imbalance. How many of us realistically have time to parse lengthy legal documents for every tool we use? I know I don't, and as teachers we often have more on our plates than we can handle.
jimford75 2 months ago

Part 3 Beyond the Episode

I chose to review the Bias article titled "YouTube’s recommendations drive 70% of what we watch," and it was eye-opening. It breaks down how YouTube uses AI to track our history, likes, and dislikes so it can continuously feed us content that's similar enough to keep us hooked. Having an established YouTube profile makes it easier for the platform to keep tracking our interests and preferences.

As a high school business teacher, I regularly use YouTube for videos related to the content being taught, both to spark student engagement and to find new, relatable material. I also observe which educational videos grab my students' attention, as this helps me identify their interests.

I see a real opportunity to use this AI knowledge with students. I'd discuss how they should be using diverse resources beyond what YouTube's recommendations serve up. I want my students to explore various content creators; it's important for them to evaluate why content is being suggested and to determine whether the information provided is not only factual but also presents multiple viewpoints.

For me as a teacher, the most important part would be the safety aspect. I want to make sure students are aware that YouTube is constantly collecting what they search, watch, like, and dislike. While students might think they understand why this information is being collected, they need to know that it goes beyond targeted ads. Perhaps it will cause my students to be more intentional with their online interactions. I suspect many students don't realize that they can change some of their privacy settings to limit data collection.

The big takeaway for my students is that I want them to realize YouTube is a valuable educational resource that can cater to their unique learning styles, but with a greater awareness that the AI is the one learning from their past interactions with the platform.
melissa8 5 months ago

Part 3

As a 1st-grade teacher, I can use OpenAI’s data management features to protect my students’ privacy while using AI tools like ChatGPT in the classroom. When I use ChatGPT to create lesson ideas, draft responses to parent messages, or build phonics activities, I make sure to delete the conversation history afterward so that no sensitive or student-specific information is stored. This helps keep my students’ data safe and models good digital habits from an early age. I also explain to my students that, just like we keep our personal things safe at school, we need to be careful about the information we share online. By doing this, I help them begin to understand responsible technology use as they grow. Using these tools thoughtfully allows me to enhance my teaching while prioritizing student privacy.
brigid-kennedy 6 months ago

Part 3

I explored the article “What Is Algorithmic Bias?” I learned that AI can sometimes treat people unfairly because it learns from data that may already have bias in it. If the data has more examples from one group than another, the AI might give better results for that group. This helped me understand that I need to watch out for bias when using AI in the classroom. I will teach my students to check whether AI responses seem fair to everyone. I will also remind them that AI is a tool and is not always right, just as we teach them with other resources. This resource will help me use AI more carefully and thoughtfully.
emily-balisteri 7 months ago

Part 3

I chose an article about an AI detection algorithm. I chose this article because, as a recent college grad, I have many memories of submitting essays and papers that I spent hours working on, only to see them flagged as AI-generated. I always thought this was interesting because I did not use AI, but the system thought I did. I wanted to take a deeper look into this algorithm and learned that AI detectors look at "text perplexity," which is how "surprised" the system is when looking at the next word. If it can guess the word easily, the perplexity is low, and vice versa. This article was also very interesting because it talked about a bias these systems have: several of them appear discriminatory toward non-native English speakers. Being in a school that has many ESL students, I think it is important to talk about the risks that using AI can have. The children we educate need to be prepared to use AI successfully in a world where AI is likely not going away.
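To make that "perplexity" idea concrete, here is a minimal Python sketch. The next-word probabilities below are invented for illustration and are not how any particular detector actually scores text; they stand in for the likelihoods a language model would assign to each next word.

```python
import math

# Minimal sketch of the "perplexity" idea behind many AI-text detectors.
# The probabilities are made up for illustration; a real detector would get
# them from a language model that scores how likely each next word is.
def perplexity(next_word_probs):
    """Average 'surprise' over a text: low when every next word was easy to guess."""
    avg_log_prob = sum(math.log(p) for p in next_word_probs) / len(next_word_probs)
    return math.exp(-avg_log_prob)

predictable_text = [0.90, 0.85, 0.80, 0.90]  # each next word was easy to predict
surprising_text = [0.05, 0.10, 0.02, 0.08]   # each next word was hard to predict

print(perplexity(predictable_text))  # low perplexity: more likely to be flagged as "AI-like"
print(perplexity(surprising_text))   # high perplexity: reads as more "human"
```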
ckearney 9 months ago

Provide an explanation of how one of these additional resources may help you use AI in the classroom.

I chose to explore the video “Gender Shades” under the topic of bias to learn more about how AI can be influenced by bias. It was eye-opening to me in regard to the gender and race bias present in AI. It is a problem that affects many people in education and the workforce. The idea that AI (especially facial recognition) does not recognize certain races, or that it easily misgenders people, is cause for great caution. These examples will allow for direct instruction about bias in AI and help students recognize that AI does have faults, no matter how integrated it has become in all of our lives.

In the classroom use of AI, I think our impact can be greater if we find ways to influence the data that goes into the large language models. As an elementary teacher, I have had great experiences with many of our ENL students. My colleagues and I have been working to weave texts, videos, and conversations into our lessons that reflect all of our students, especially since these kinds of materials are a large part of our everyday lives. Unfortunately, biases are a part of our world today, so it makes sense that we see them in AI as well.
msionko Over 1 year ago

Part 3

Gender Shades points to “the lack of diversity in training videos and data sets.” This is a problem; companies need to do better with these commercial products, because misidentifying someone when you are trusting the product can cause critical problems for people. It can associate them with a crime they did not commit, or say they were somewhere they were not. When certain races and genders are not identified at as high a rate as others, we are underprepared to use these tools. Accountability must be taken if these products are going to be “trusted.” One positive, if you want to call it that, is that it provides a teaching moment on AI: trust has to be earned, and AI is NOT all-encompassing or omniscient. Even when it seems to be, there still needs to be a “Healthy Skepticism,” as stated earlier in the badge.
dtracz Over 1 year ago

Part III

I chose AI fictions, or hallucinations. Knowing that AI can create an answer that sounds very real even though it is riddled with fiction, mistakes, etc., should be kept top of mind. Having to proofread and verify everything that AI produces will be time-consuming, but it does need to be part of the process when adding AI to your planning and instructional materials. Reminding myself and my students that AI has a hard time producing FACTS because of how it functions is also important, and it is good to follow that up with an "are you sure about this?" prompt to reevaluate any odd-sounding AI responses. I like the suggestion to give your students some doctored-up information and have them try to debunk it! That sounds interesting and would make a very good lesson for having them experience this point.
jduma Over 1 year ago

Part 3 Evidence

Explain how one of these additional resources may help you use AI in the classroom. 
I watched the video "Gender Shades" under the topic of bias to learn more about how AI can be influenced by bias. The study explained in this video was shocking to me; it is alarming that a tool already in use can be so flawed. While AI is being improved daily, it is important that we keep this in mind and think of ways to combat this bias, because it can be extremely harmful to people, especially people of color. There is a local school that has been in the news over the past few years for trying to implement the use of this technology in their high school, and many leaders, parents, and students have great concerns about its use, which, after learning more about this bias, is completely warranted.

As far as classroom use goes, as a first-grade teacher I think our impact can be greater if we find ways to influence the data that goes into the large language models. With our DEI initiative at SH and a personal interest of my own, we've been working to be sure to weave texts, videos, and conversations into our lessons that reflect all of our students. When these things are more a part of our everyday lives, AI will have no choice but to have less bias, because the information and "language" it is collecting will be more representative. We clearly have a very long way to go.
kielebarbalate Over 1 year ago