Comment by shaky-carrousel

5 days ago

They absolutely do. They know more in English than in Spanish; I've seen that in all models since the beginning.

They have more data in English than Spanish. LLMs don't know or reason or follow instructions. They merely render text continuations that are coherent with the expectations you set when prompting. The fact that they are not able to sustain the illusion in languages with less available training data than English should make that clear.
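A minimal sketch of the "continuation" point, using an invented bigram table in place of a trained model (all tokens and probabilities below are made up for illustration): the loop just appends whichever next token the statistics favor, and when the statistics run out, so does the output.

    # Toy illustration of next-token continuation (not a real LLM).
    # The "model" is a hand-made table of next-token probabilities;
    # a real model learns these statistics from its corpus, which is
    # why coverage is thinner in languages with less training data.
    import random

    bigram_probs = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.8, "sat": 0.2},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def continue_text(prompt, steps=3):
        tokens = prompt.split()
        for _ in range(steps):
            options = bigram_probs.get(tokens[-1])
            if not options:  # no statistics for this context: generation stops
                break
            words, weights = zip(*options.items())
            tokens.append(random.choices(words, weights=weights)[0])
        return " ".join(tokens)

    print(continue_text("the cat"))  # e.g. "the cat sat down"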

  • > They have more data in English than Spanish.

    Yep, that right there seems like the definition of knowing. Don't worry, your humanity isn't at risk.

    • No, mental models matter. This has nothing to do with AGI doomerism.

      Knowing implies reasoning. LLMs don't "know" things. These statistical models continue text. Having a mental model that they "know" things, that they can "reason" or "follow instructions", is driving all sorts of poor decisions.

      Software has an abstraction fetish. So much of the material available for learners is riddled with analogies and a "you don't need to know that" attitude. That is counterproductive, and I think having accurate mental models matters.

      2 replies →