Comment by withinboredom
2 years ago
You’d also get text that has very little to do with the training material from any statistical model. A prediction may have nothing to do with the past, and it can turn out right or wrong. For example, the weather forecast said it would rain all day, but the sun is up and bright in the sky without a cloud in sight.
The model knows what a limerick is from the source material. It knows what your criteria are from the source material. It can predict what someone would say given that prompt.
Humans also do this. I’m usually one or two words ahead of the person I’m speaking to, sometimes even entire paragraphs ahead if I’m paying full attention. My dreams give me unrealistic situations to explore new ways of dealing with them. When I write code, I have a pretty good idea of what I’m going to write before I write it.
The main difference between a human and an LLM is that a human has no hard limit. A human will still continue when overwhelmed with data, usually by shedding the unimportant parts. An LLM will just tell you it’s too much data. Smart humans won’t simply shed the data; they’ll “mark” it mentally as potentially important and come back to it once a deeper understanding is achieved.
There are other, smaller differences as well, but that is the biggest, most annoying one, so far.