Comment by lomase

3 days ago

No LLM can reason about anything.

Reasoning is a human trait.

Whatever you say lol.

  • The use of the word "reasoning" in the context of LLMs, like the "I" in "AI", is more marketing than technical reality. I know it can be confusing.

    • Regardless of semantics, LLMs + tooling can do impressive things.

      For example, I can tell an LLM to scan my database schema and compare it to the code to detect drift or inconsistencies.

      And while doing that, it has enough condensed world knowledge to point out to me that the code is probably right in declaring person.name a non-nullable string despite the database column being nullable.

      And it can infer that the date_of_birth column is correct in being nullable in the database schema and wrong in code, where the type is a non-nullable date, because it knows that in my system date_of_birth is an optional field.

      This is a simple example that non-LLM tooling can also solve (a mechanical version is sketched at the end of this comment). In practice it can do much more advanced reasoning about business rules.

      We can argue semantics all day, but this is reason enough for me to find them useful.

      There are many examples I could give. To the skeptics, I recommend trying LLMs for understanding large systems. But take the time to give them read-only access to your database schema.
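      To make the check concrete, here is a minimal TypeScript sketch of the mechanical half of that comparison. The person.name and date_of_birth columns come from the examples above; everything else (the data shapes, the hardcoded values) is a hypothetical stand-in for real schema introspection, not a real tool.

      ```typescript
      // Minimal sketch: diff nullability between an introspected DB schema
      // and the application's declared types. Column names come from the
      // comment above; shapes and values are hypothetical.

      type Nullability = { nullable: boolean };

      // As introspected from the database (e.g. information_schema.columns).
      const dbColumns: Record<string, Nullability> = {
        "person.name": { nullable: true },
        "person.date_of_birth": { nullable: true },
      };

      // As declared in application code.
      const codeFields: Record<string, Nullability> = {
        "person.name": { nullable: false },
        "person.date_of_birth": { nullable: false },
      };

      // Report every field whose nullability disagrees between DB and code.
      for (const [field, db] of Object.entries(dbColumns)) {
        const code = codeFields[field];
        if (code && code.nullable !== db.nullable) {
          console.log(
            `drift: ${field} is ${db.nullable ? "nullable" : "NOT NULL"} in the ` +
            `database but ${code.nullable ? "nullable" : "non-nullable"} in code`,
          );
        }
      }
      ```

      Note that this mechanical diff flags both columns identically. The point above is that the LLM's world knowledge lets it go further and judge which side is probably wrong in each case: for name, the database is too loose; for date_of_birth, the code is too strict.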

> No LLM can reason about anything.

> Reasoning is a human trait.

Note: this is not directed at the commenter or any person in particular. It is directed at various patterns I've noticed.

I often notice claims like the following:

- human intelligence is the "truest" form of intelligence;

- machines can't reason (without clearly stating what is meant by "reasoning");

- [such and such] can only be done by a human (without clarifying that you mean at the present time, with the present technology you know of).

Such claims are, in my view, rather unhelpful framings, or worse: tropes or thought-terminating clichés. We would be wise to ask ourselves how such things persist.

How do these ideas lodge in our brains? Various shaky premises (including cognitive missteps) lead to them, so I want to make some general comments about the patterns that produce this kind of thinking.

It is more important than ever for people to grow their understanding of these topics. I suggest considering the following.

1. Recognize that one probably can't offer a definition of {reasoning, intelligence, &c} that is widely agreed upon. The best you can hope for is to clarify the sense in which you mean the term. There are often fairly clear 'camps' that can easily be referenced.

2. Recognize that implicitly hiding a definition in your claims -- or worse, forcing a definition on people -- doesn't do much good.

3. Be aware that one's language may often be interpreted in various ways by reasonable people.

4. Internalize that dictionaries are catalogs of usage that evolve over time. Dictionaries are not intended to be commandments of correctness, though some still think dictionary-as-bludgeon is somehow appropriate.

5. Acknowledge the confusing terminology around AI/LLMs in particular. For example, reasonable people can recognize that "reasoning" in this context is a fraught term.

6. Recognize that humanity is only getting started when it comes to making sense of how "intelligence" decomposes, how our brains work, and the many nuanced differences between machine intelligence and human intelligence.

7. Recognize one's participation in a social context. Strive not to provide fuel for the fires of misunderstanding. If you use a fraught term, be extra careful to say what you mean.

8. Hopefully obvious: sweeping generalizations and blanket black-or-white statements are unlikely to be true unless you are talking about formal systems like logic and mathematics. Don't let your thinking fall into that trap, and don't spread it -- that insults the intelligence of your audience.

9. Generally speaking, people would be wise† to up their epistemic game. If you say things that are obviously inaccurate, you are wasting intelligence refined over millions of years by evolution and culture. Doing so is self-destructive, for it makes you less valuable relative to LLMs, which (although they blunder) are often more reliable than people who speak carelessly.

† Because it benefits the person directly, and it helps culture, civilization, progress, &c.