Brent Peterson

Part 3 Beyond the Episode

AI Fiction

  • January 18, 2024 at 8:54 AM
I focused on AI fiction and how, through re-prompting, you can often get the chatbot to recognize its error and come back with an accurate response.

I read an article from PBS in the resources that interviewed a number of prominent AI leaders and got their thoughts on hallucinations when using AI tools. The general consensus seems to be that these hallucinations are not "fixable" because, in the end, these large language models are predictive; there will necessarily be some level of inaccuracy in the computations the model performs to figure out the next word to type, though the models can get better. It will simply take time for them to improve. Sam Altman of OpenAI even said, "I probably trust the answers that come out of ChatGPT the least of anybody on Earth." If he is skeptical, then we obviously need to be as well.

For my test prompt I asked ChatGPT to explain how the character Thor in the Marvel Cinematic Universe became unworthy and was not able to wield his hammer Mjolnir. ChatGPT's response was inaccurate because it used the wrong movie to explain when and how he became unworthy. When I re-prompted it by asking if it was sure about its answer and provided more context around the correct answer, it did come back with the correct response. It even apologized to me for the inaccuracy, which I thought was quite nice.

These types of examples, using topics that you know very well, are excellent tools for highlighting the hallucination and misinformation issues that AI models create. Using them with students would be a useful exercise in showing how this works.