Comment by dbbk

4 months ago

> Right, LLMs don't understand TS because they're not integrated with it. When they come across something they don't know, they just start hallucinating, and they don't verify whether it's actually valid (because they can't).

LLMs on their own can't, but agents can: they can read documentation into context, verify code, compile it, run analysis tools, and execute tests.
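
For example, a minimal verification step might look like the sketch below (this is an illustration, not any particular agent's implementation; it assumes the `typescript` package is available so `npx tsc` works, and the file names are hypothetical):

```ts
// Sketch of an agent-side verification step: type-check generated
// TypeScript with the real compiler instead of trusting the model.
import { spawnSync } from "node:child_process";
import { writeFileSync, mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function typeCheck(generatedCode: string): { ok: boolean; diagnostics: string } {
  // Write the candidate code to a throwaway file.
  const dir = mkdtempSync(join(tmpdir(), "agent-"));
  const file = join(dir, "candidate.ts");
  writeFileSync(file, generatedCode);

  // --noEmit: we only want the compiler's diagnostics, not output files.
  const result = spawnSync("npx", ["tsc", "--noEmit", "--strict", file], {
    encoding: "utf8",
  });

  return { ok: result.status === 0, diagnostics: result.stdout + result.stderr };
}

// The loop: if the check fails, feed the diagnostics back to the model
// as context and ask it to revise, rather than shipping the guess.
const check = typeCheck(`const n: number = "not a number";`);
if (!check.ok) {
  console.log("Reject and retry with:", check.diagnostics);
}
```

The point is that the hallucination never has to reach the user: an invalid identifier or wrong type fails the compile, and the error message itself becomes new context for the next attempt.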

Hallucinations do occur, but they're becoming rarer (especially if you prompt to the model's strengths and provide context), and tests catch them.
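
In the same spirit, the test suite gives a second signal after a change is applied. A rough sketch, assuming Node's built-in test runner (swap in your project's own command, e.g. jest or vitest):

```ts
// Sketch: run the tests after applying a generated change; a nonzero
// exit code means revert or retry with the failure output as context.
import { spawnSync } from "node:child_process";

function testsPass(): { ok: boolean; output: string } {
  const result = spawnSync("node", ["--test"], { encoding: "utf8" });
  return { ok: result.status === 0, output: result.stdout + result.stderr };
}
```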