Comment by beautifulfreak

9 hours ago

Language Models are Injective and Hence Invertible https://arxiv.org/abs/2510.15511

That paper is about retrieving the input (the user's prompt) from the hidden-layer activations of a trained LLM, since the mapping is 1-to-1. I don't think it makes any claims about training data, and certainly not about being able to retrieve it losslessly from a model.

I don't believe they are injective, but if they are, then they are not capable of (correct) thought.

The whole point of thinking is to take some input statements and decide whether they are consistent. Or, project them onto a close but consistent set of statements. (Kinda like error-correcting codes: you want to be able to detect logical inconsistency, and ideally repair it.)

But that implies the set of consistent statements is a proper subset of all possible statements, so any projection onto it must map distinct inputs to the same output.
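To make the analogy concrete, here is a minimal sketch (my own illustration, not from the paper) using the simplest error-correcting code there is, the 3-bit repetition code. Its "consistent" set is just {000, 111}, and repairing by majority vote projects all 8 possible words onto those 2 codewords. That projection is many-to-one, hence necessarily non-injective:

```python
from itertools import product

def repair(bits):
    """Project a 3-bit word onto the nearest repetition codeword (majority vote)."""
    majority = 1 if sum(bits) >= 2 else 0
    return (majority,) * 3

# Group all 8 possible 3-bit words by the codeword they repair to.
preimages = {}
for word in product((0, 1), repeat=3):
    preimages.setdefault(repair(word), []).append(word)

# Each codeword has 4 preimages: 8 inputs collapse onto 2 outputs,
# so the repair map cannot be inverted.
for codeword, words in preimages.items():
    print(codeword, len(words))
```

The same structural point applies to any "repair inconsistency" operation: if the target set is a proper subset, the map that lands in it collapses information.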

The set of non-invertible inputs is of measure 0 (that is the claim). But in real life (where we live) this may be a vacuous statement, like saying "the set of the rationals has measure 0". Right, that is true. It is also useless.