Large language models (LLMs) tend to project a high degree of confidence even when they are completely wrong. As a science teacher, that worries me, because I want my students to learn to think critically rather than accept explanations on authority. I’ve started using AI in class as a way to model how to engage with these tools. Sometimes I’ll show students an AI-generated answer and ask, “Does this actually make sense?” It can be an engaging way to get them thinking both about the question itself and about how these tools generate their answers.


