Comment by closewith
4 months ago
LLMs can't, but agents can. They can read documentation into context, verify code, compile, use analysis tools, and run tests.
Hallucinations do still occur, but they're becoming rarer (especially if you prompt to the model's strengths and provide context), and tests catch them.
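To make the claim concrete, here is a minimal sketch of the verify-compile-test loop being described. It assumes Python tooling; generate_patch is a hypothetical stand-in for whatever model call the agent makes, and the file name and the use of py_compile and pytest are illustrative choices, not a specific product's workflow.

    # Sketch of an agent loop: generate code, gate it behind a compile
    # check and the test suite, and feed failures back to the model.
    import subprocess

    def generate_patch(prompt: str, feedback: str) -> str:
        """Hypothetical LLM call; returns candidate source code."""
        raise NotImplementedError  # wire up your model client here

    def run(cmd: list[str]) -> tuple[int, str]:
        """Run a command and capture combined stdout/stderr."""
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode, proc.stdout + proc.stderr

    def agent_loop(prompt: str, max_attempts: int = 3) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = generate_patch(prompt, feedback)
            with open("candidate.py", "w") as f:
                f.write(code)
            # Static check catches syntax-level hallucinations.
            rc, out = run(["python", "-m", "py_compile", "candidate.py"])
            if rc != 0:
                feedback = out
                continue
            # Test suite catches behavioural hallucinations.
            rc, out = run(["python", "-m", "pytest", "-q"])
            if rc == 0:
                return code
            feedback = out
        return None

The point is simply that the model's output never lands in the codebase without passing the compile and test gates, so a hallucinated API or wrong behaviour surfaces as feedback rather than as merged code.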