Comment by FrustratedMonky

2 years ago

Why do we think that inside the 'weights' there is not a model? Where in the brain can you point and say 'there is the model'? The wiggly mass of neurons creates models and symbols, so why do we assume the same thing isn't happening inside large neural nets? When I see pictures of both (brain scan versus weights), they look pretty similar. Sorry, I don't have the latest citation, but I was under the assumption that the biggest breakthroughs in AI were around symbolic logic.

As I said, the model is vague at best. Regardless of how the information is stored, a child knows that a ball is a thing with tangible behaviors, not just a word that often appears with certain other words. A child knows what truth is, and LLMs rather notoriously do not. An older adult knows that a citation must not only satisfy a form but also relate to something that exists in the real world. An LLM is helpless with material not part of its training set. Try getting one to review a draft of a not-yet-published paper or book, and you'll get obvious garbage back. Any human with an equivalent dollar value in training can do better. A human can enunciate their model, and make predictions, and adjust the model in recognizable ways without a full-brain reset. An LLM can do none of these things. The differences are legion.

LLMs are not just generalists, but dilettantes to a degree we'd find extremely tiresome in a human. So of course half the HN commentariat loves them. It's a story that has more to do with Pygmalion or Narcissus than with Prometheus ... and BTW good luck getting Chad or Brad to understand that metaphor.