Large Language Models (LLMs) comb the internet for text to build gargantuan data sets, which they then use as sources of language probabilities. In essence, they look at a series of words and find the most likely word to come next, based on what has come so far. The prompt is also taken into account: it becomes part of the context the model conditions on when predicting. While this seems slightly mundane, the implications and ramifications are huge, and we have not felt them all yet.
This mechanism is often called next-token prediction. It uses statistics and probabilities drawn from vast amounts of internet data to predict what comes next. While new content is being produced, it is arguably not truly "new," but more a statistical amalgamation of all that came before.
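The "predict the most likely next word" idea can be sketched with a toy model. This is only an illustration, not how real LLMs work internally: here the probabilities come from simple word-pair counts over a tiny made-up corpus, whereas a real LLM learns them with a neural network over enormous data sets. The corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (a bigram model).
# This stands in for the "language probabilities" the text describes.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most likely word to follow `word`, per the counts."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

Chaining such predictions word by word produces fluent-looking text that is, as the paragraph above says, a statistical amalgamation of the training data rather than something conjured from nothing.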
Generative AI looks to go further, aiming to create genuinely novel content in something closer to the way a human brain works. This is the model that Google is "moon-shooting" for. We will see if they can get there!


