Comment by burnte

1 day ago

> It really makes me believe that the models do not really understand the topic, even the basics but just try to predict the text.

This is correct. There is no understanding; there aren't even concepts. It's just math, the same statistical processing of text we've been doing with computers for decades, only faster and faster. They're super useful in some areas, but they're not smart and they don't think.
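
To make the "it's just math" point concrete, here's a toy next-token predictor, a bigram model over whole words. This is my own illustrative sketch, not how any real LLM is implemented; production models replace the count table with a learned neural network over subword tokens, but the underlying task is the same: given context, output a probability distribution over the next token and sample from it.

```python
# Toy next-token prediction: a bigram model.
# Counts how often each word follows each other word, then samples
# the next word from that distribution. No concepts, no understanding,
# just frequency statistics.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # e.g. "cat", "mat", or "fish" -- pure statistics
```

An LLM does the same thing at vastly greater scale and with a far more expressive function for estimating the distribution, which is exactly the "faster and faster math on words" described above.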

I've never seen so much misinformation trotted out by the laity as I have with LLMs. It's like being in a 19th-century forum where people earnestly argue that cameras can steal your soul. These people haven't a clue about the mechanism.