A.I.101 Part #4: Ensuring a Responsible Approach

Part 3: Beyond the Episode


  • Last updated December 18, 2023 at 12:30 PM by sweethometc
Use the resources provided to extend your learning!

In this Task…

Explore the Resource Link below to review various areas that might interest you:
https://docs.google.com/document/d/1m8kmhdd3BbWhRD48BiEPYjiU61JJy-V80bfge0NTFVg/copy

Your Choice:
  1. Pick ONE area highlighted in the Resource Link (listed below, 1-3). There are a variety of bulleted articles and materials to explore at the end of each section.
    1. Data Privacy (any of the four bulleted items)
    2. AI Fictions (any of the six bulleted items)
    3. Bias (any of the five bulleted items)
  2. After you explore one (1) article or video from the bulleted list (Data Privacy, OR AI Fictions, OR Bias), consider what you learn and how it might shape your use of AI in the classroom.

Evidence of Learning...

In a short paragraph or two, reflect on this task:
  • Explain how one of these additional resources may help you use AI in the classroom. 

All posted evidence

Evidence Part 3

Gender Shades was absolutely fascinating in addressing the gender/race bias present in AI facial recognition. It is a problem that affects many people in education and the workforce. The idea that facial recognition may not recognize certain races, or that it easily misgenders people, is cause for great concern. Knowing about these biases, I believe I can showcase them to students so that they question AI and recognize that it does have faults, no matter how integrated it has become in all of our lives.
kelly-gravel Over 1 year ago

Part 3- Beyond the Episode...

1. I chose to explore the topic of bias even further. As a social studies teacher, I come across the word and use of bias quite often in our curriculum, in daily/weekly lessons, and when preparing for the Regents Exam. In the newly formatted exam, the Short Essay Question (SEQ) explores the reliability of the documents in question. Students are asked to analyze one of the documents (Document #1 or Document #2) and to explain how the audience, purpose, BIAS, or POV affects the document's use as a reliable source of evidence. It's a great skill for students to be able to read and analyze a document and to explain how the author's bias affects its reliability (does it make it reliable or unreliable?). So hopefully, by teaching this alongside the algorithmic bias we see online with YouTube's recommendations (which drive 70% of what we watch) and political bias in ChatGPT, we can continue to educate our students to spot these potential biases in their work and research, both in school and at home, and to keep teaching and encouraging digital citizenship and literacy skills in our social studies classrooms.
martjd28 Over 1 year ago

Data Privacy

I think it's really important to educate our students so that they have informed choices as much as possible. It's also very important to teach students about tracking and how they can manage their settings. Even as adults, we should be learning about these data privacy issues as our technology gets more and more advanced.
bonnie-lorentz Over 1 year ago

evidence

I read the article "Meta warns its new chatbot may forget that it's a bot." Meta released BlenderBot for all to test, but maybe don't believe everything it says. Users of AI need to be aware that chatbots can state misinformation, or hallucinate. These models are designed to predict the next word, but there is some rate at which they do so inaccurately. Meta stated that these errors won't be easily fixed; the models won't be perfect, but they will get better.
When using AI in the classroom, you can demonstrate to students how AI can present inaccurate information, and how to check your sources to make sure the information you are presenting is accurate. The user can prompt and ask, "Are you sure about ___?" Students can practice digital literacy. We can instruct students in how to use search engines and large language models to complement each other's strengths and weaknesses. Even in our day-to-day lives there seems to be so much "fake news" that you often wonder what is true and what is fake; it requires us to do our own research to find out.
lwargo Over 1 year ago

The pitfalls of AI... as an AI teaching tool


I chose the first article under AI fictions that documented the recent incident involving Manhattan lawyers Steven Schwartz and Peter LoDuca, who were fined $5,000 for submitting a court brief generated by ChatGPT containing fabricated information. This incident underscores the critical importance of responsible and ethical use of artificial intelligence in various professional fields. While this case exposes the potential pitfalls of relying solely on AI-generated content, it also serves as a valuable teaching moment in the classroom. Educators can leverage stories like these to emphasize the necessity of maintaining a thorough understanding of the tools at one's disposal, particularly when integrating AI into legal practice or other professional domains. By highlighting the consequences of blindly trusting AI-generated information, students can develop a heightened awareness of the ethical considerations and potential risks associated with AI applications in their future careers.

This incident also offers an opportunity to explore the importance of accountability and transparency in AI use. In the classroom, educators can engage students in discussions on how to verify and cross-reference AI-generated content to ensure its accuracy and reliability. Encouraging a critical approach to technology and fostering a culture of responsible AI utilization will empower students to navigate the evolving landscape of artificial intelligence in a conscientious manner, preventing similar ethical lapses in their own professional endeavors. Tying back to the discussion on Pride and Prejudice, it is important to recognize that content generated by LLMs needs to be looked over carefully and should be viewed as a lattice, not a final form.
john-elliott Almost 2 years ago

Liberta- Part III

For this task I chose to examine AI Fictions, and I was drawn to an article on fake articles being attributed to The Guardian. The article talked about how the publication's editing team sought out the falsified articles, and how the newest concern was not just the use of ChatGPT to write them, but to completely fabricate the sources. The entire citation process is now jeopardized as well. This is very concerning for journalists, who base their careers on credibility and presenting factual information to the public. It is likewise concerning for me as an educator, as I see these AI tools used more and more by students to cheat and generate writing assignments rather than do the work/research themselves. They are not only generating their writing pieces, but they most certainly are not checking the work for "facts" or whether sources are real. This is a black hole of problems that is widening day by day. Another provided article calls this fabrication by AI "AI hallucination." It appears the makers of ChatGPT, Bard, and others are trying to create additional tech tools to combat this, but nothing is definitive yet. The article discusses how new chatbots and tech are being created as tools for news sources and educators to fight AI falsification. According to the article, over 4.4 trillion dollars have already been invested to create AI challenge tools to validate sources and reliability. More such tools will be useful in the fight against inappropriate use of AI for educators and other careers in the future. In the meantime, they encourage teachers and news sources to use AI detectors to test not only content but now citation sources as well. The laundry list of concerns continues...
dliberta Almost 2 years ago

AI Fiction

I focused on AI fiction and how, through re-prompting, you can oftentimes get the chatbot to recognize its error and come back with an accurate response.

I read an article from PBS in the resources that interviewed a number of prominent AI leaders and got their thoughts on hallucinations when using AI tools. The general consensus seems to be that these hallucinations are not "fixable," because in the end these large language models are predictive: there will necessarily be some level of inaccuracy in the computations the program does to figure out the next word to type. They can get better; it will simply take time for them to improve. Sam Altman of OpenAI, the maker of ChatGPT, even said, "I probably trust the answers that come out of ChatGPT the least of anybody on Earth." If he is skeptical, then we obviously need to be as well.

For my test prompt, I asked ChatGPT to explain how the character Thor in the Marvel Cinematic Universe became unworthy and was not able to wield his hammer, Mjolnir. ChatGPT's response was inaccurate because it used the wrong movie to explain when and how he became unworthy. When I re-prompted it by asking if it was sure about its answer and provided more context around the correct answer, it did come back with the correct response. It even apologized to me for the inaccuracy, which I thought was quite nice.

These types of examples, using topics that you know very well, are excellent tools to highlight the hallucination and misinformation issues that AI models create. Using them with students would be a useful exercise in showing how this works.
brent-peterson Almost 2 years ago

Part 3

I decided to focus on Bias for this task. As a social studies teacher, this topic can be discussed across the various time periods of history where we study injustice, inequality, and conflict. Historical examples of bias could be used as a springboard to teach about the current enduring issue of racial bias in our data systems.
I was really intrigued by the "Gender Shades" video on algorithmic bias and the video on political bias in ChatGPT. These forms of algorithmic bias highlight larger issues that are occurring in our nation: racism, gender inequality, and inequity of resources. In my classroom, I hope to use these provided resources to teach digital literacy skills. I would encourage students to spot bias when it is apparent and to use a variety of tools to eliminate it as much as possible.
cutzig Almost 2 years ago