Comment by paulryanrogers
5 days ago
IME, traditional rules-based systems don't try to solve free-form problems. They stop at the point where the inputs can't be handled any further, whereas an LLM could continue, albeit without any guarantee the result is accurate. It could completely hallucinate a fictional solution, something I've seen too often to trust them.