In a short paragraph or two, reflect on this task:
- How can you mitigate risks for your students (data privacy, misinformation, algorithmic bias)?
- Some items you may wish to explore further:
- FERPA
- COPPA
- AI Fiction and Algorithmic Bias. Describe what this concern is and how teachers and students need to approach it.
I found the section on AI fiction and misinformation fascinating. While watching the two women discuss Pride and Prejudice, GPT at first gave an incorrect response. What was most interesting, and had me on the edge of my seat for a moment, was that when they prompted it again, it came back with a corrected interpretation and supporting quotes. However, my enthusiasm fell when they went on to point out flaws in each piece of evidence GPT had gathered. Between citing the wrong chapter, misquoting passages, and offering quotes that did not actually support the claim, it seemed that ChatGPT was not up to this task.
Algorithmic bias seems like a challenging issue that is best addressed by having diverse human populations create the algorithms we use. A homogeneous group of coders will end up creating biased algorithms however hard they try, because no small, homogeneous group can possibly imagine all the points of view and experiences that could come to bear. In fact, no group except the entirety of humanity could completely eliminate the problem. Knowing both of these extremes, we should strive to do the best we can while recognizing there will always be gaps. Once algorithms are published and activated, we must continue to review them for biases that were missed initially, institute fixes, brainstorm future safeguards, and promote the team members who were able to overcome the challenges and pitfalls that have already occurred.


