Comment by petesergeant

5 days ago

> Knowing implies reasoning

That's not really clear-cut; it's simply a position you're taking. A JTB (justified true belief) account could (I reckon) say that a model's "knowledge" is justified by the training process and reward functions.

> LLMs don't "know" things. These statistical models continuate text.

I don't think it's clear to anyone at this point whether the steps taken before token selection (e.g., the traversal of the model's high-dimensional representation space that attention provides) are close to or far from how our own thought processes work. But describing LLMs as "simply" continuating text reduces them to their outputs: from my perspective, as someone on the other side of a text-based web app from you, you too are an entity that simply continuates text.
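
To make "token selection" concrete, here's a minimal sketch of greedy next-token continuation using Hugging Face transformers (the `gpt2` checkpoint, prompt, and loop length are arbitrary placeholders, not anything from the discussion above). The point is that the selection loop itself is trivial; everything under dispute happens inside the forward pass:

```python
# Minimal sketch of next-token "continuation". The loop below is the entire
# "continuate text" part; the contested question is what the attention layers
# are doing inside model(...), not how the next token gets picked afterwards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Knowing implies", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]       # scores for the next token only
    next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick: the "selection"
    ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```

Swapping `argmax` for sampling from the softmax distribution changes the style of the continuation, not the nature of the computation.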

You have no way of knowing whether this comment was written by a sentient entity, with thoughts and agency, or by an LLM.