After reading the article about Google's Bard error at launch, I am reminded that even advanced AI systems make mistakes and hallucinate facts. As an educator, this reinforces the idea that AI-generated answers should always be double-checked before being used in lessons or assessments. That applies both to me, if I use AI to create class notes or assignments, and to my students, who may use it to study or complete their work. In class, I might use the Bard mistake as a case study: I would show students the AI's incorrect answer and ask them to find and correct the error using reliable sources. This gives them a concrete example of AI's fallibility and strengthens their critical thinking about trusting AI outputs. I could also collect a few other examples so they get repeated practice.


