John Elliott

Part 2 - Data & Privacy


  • February 3, 2024 at 1:00 PM
In a short paragraph or two, reflect on this task:
  • How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
    • Some items you may wish to explore further:
      • FERPA
      • COPPA
  • AI fiction and algorithmic bias: describe what this concern is and how teachers and students need to approach it.
To be perfectly frank, I am not certain exactly how to make sure that students are safe in terms of FERPA and COPPA. I think I would need to check with our district technology committee to see whether we are allowed to have students work with AI platforms at all. If so, which ones? And if certain ones are allowed, what are the limits and guidelines? I am fully aware that some of these platforms may be primed to dig into student data, and that it is OUR responsibility as educators to protect our students' data and privacy from the tools we place in their hands.

I found the segment on AI fiction and misinformation fascinating. While watching the two women discuss Pride and Prejudice, ChatGPT at first gave an incorrect response. What was most interesting, and had me on the edge of my seat for a second, was when they prompted it again and it came back with a corrected interpretation and quotes. My enthusiasm fell, however, when they went on to point out flaws in each piece of evidence ChatGPT had gathered. From an incorrect chapter, to wrong quotes, to quotes that did not support the claim, it seemed that ChatGPT was simply not up to this task.

Algorithmic bias seems like a challenging issue that is best addressed by having diverse human populations create the algorithms we use. A homogeneous group of coders will end up creating biased algorithms however hard they try, because no small group of similar people can possibly imagine all the points of view and experiences that could come to bear. In fact, no group short of the entirety of humanity could completely eliminate the problem. Knowing both these extremes, we should strive to do the best we can. We should recognize that there will always be gaps, so once algorithms are published and activated, we must continue to review them for biases that were missed initially, institute solutions, and brainstorm future ones, promoting the team members who were able to overcome the challenges and pitfalls that have already occurred.