Comment by nurettin

9 days ago

Seeing something that resembles reasoning doesn't make it reasoning.

What makes it "seem to get better", and what keeps throwing people like LeCun off, is the training bias, the prompts, the tooling, and the billions spent cherry-picking the data it trains on.

What LLMs do best is language generation, which leads toward intelligence but is not itself intelligence. If you want someone who was right all along, maybe try Wittgenstein.