Comment by isodev

2 days ago

The use of the word "reasoning" in the context of LLMs, just like the "I" in "AI", is more marketing than technical reality. I know it can be confusing.

Regardless of semantics, LLMs + tooling can do impressive things.

For example, I can tell an LLM to scan my database schema and compare it to my code to detect drift or inconsistencies.

And while doing that, it has enough condensed world knowledge to point out that the code is probably right to declare person.name a non-nullable string even though the database column is nullable.

And it can infer that the date_of_birth column is correctly nullable in the database schema but wrong in code, where the type is a non-nullable date, because it knows that in my system date_of_birth is an optional field.
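Mechanically, a drift check like this boils down to comparing nullability on each side of the boundary. A minimal sketch (the column names, flags, and `find_drift` helper are all hypothetical; note that the diff itself is trivial, while deciding *which* side is right is where world knowledge about fields like date_of_birth comes in):

```python
# Hypothetical schema snapshot: column name -> nullable? (as read from the DB)
db_schema = {
    "name": True,           # nullable in the database
    "date_of_birth": True,  # nullable in the database
}

# Hypothetical code-side view: field name -> nullable? (from type declarations)
code_types = {
    "name": False,           # declared as a non-nullable string
    "date_of_birth": False,  # declared as a non-nullable date (the bug)
}

def find_drift(db, code):
    """Report columns whose nullability differs between DB and code."""
    return sorted(
        col for col in db.keys() & code.keys()
        if db[col] != code[col]
    )

print(find_drift(db_schema, code_types))  # ['date_of_birth', 'name']
```

Both columns show up as drift; the interesting part is that only one of them (date_of_birth) is an actual bug, which is exactly the judgment a plain diff can't make.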

This is a simple example that non-LLM tooling could also solve. In practice it can do much more advanced reasoning about business rules.

We can argue semantics all day, but this is useful enough for me.

There are many examples I could give. To the skeptics, I recommend trying LLMs for understanding large systems. But take the time to give them read-only access to your database schema.