I read the article "Meta warns its new chatbot may forget that it's a bot," which covers Meta releasing BlenderBot for everyone to test, with the caveat that maybe you shouldn't believe everything it says. Users of AI need to be aware that chatbots can state misinformation, often called hallucinations. These models are designed to predict the next word, but there is some rate at which they do so inaccurately. Meta stated that these problems won't be easily fixed; the bot won't be perfect, but it will get better.
When using AI in the classroom, you can demonstrate to students how AI can present inaccurate information and how to check your sources so that the information you present is accurate. The user can prompt the chatbot with a follow-up like "Are you sure about ___?", and students can practice digital literacy. We can also teach students how to use search engines and large language models together, so that the strengths of one complement the weaknesses of the other. Even in our day-to-day lives there seems to be so much "fake news" that you often wonder what is true and what is fake. It requires us to do our own research and figure out what is real and what is not.


