Comment by otabdeveloper4

11 hours ago

LLMs don't think at all.

Forcing it to be concise doesn't work because it wasn't trained on token strings that short.

> Forcing it to be concise doesn't work because it wasn't trained on token strings that short.

This is a 2023-era comment and is incorrect.

  • LLM architectures have not changed at all since 2023.

    > but mmuh latest SOTA from CloudCorp (c)!

    You don't know how these things work and all you have to go on is marketing copy.

    • Yeah, you don't know anything about LLM architectures. They often change with each model release.

      You also aren't aware that there's more to it than "LLM architecture". And you're rather confident despite your lack of knowledge.

      You're like the old LLMs before ChatGPT was released that were kinda neat, but usually wrong and overconfident about it.

They’re able to solve complex, unstructured problems independently. They can express themselves in every major human language fluently. Sure, they don’t actually have a brain like we do, but they emulate it pretty well. What’s your definition of thinking?