To keep students safe when using AI, I think the biggest thing is awareness. Tools like ChatGPT are great, but they come with risks, especially around data privacy, misinformation, and hallucination. Knowing about laws like FERPA and COPPA reminds us that student info shouldn't be shared or stored without care. I also talk with students about how AI sometimes "makes things up" or reflects human bias, since it learns from data created by people.
Algorithm bias is a big one! When a system's results lean one way because of the data it was trained on, it can leave out voices or repeat stereotypes without anyone realizing it. I try to encourage my students to question what they see, double-check facts, and remember that AI isn't always right! The goal isn't to avoid AI, but to use it wisely and stay mindful of where the information comes from.


