Comment by tqi
10 days ago
> It’s impossible to reason about and debug why the LLM made a given decision, which means it’s very hard to change how it makes those decisions if you need to tweak them... The LLM is good at figuring out what the hell the user is trying to do and routing it to the right part of your system.
I'm not sure how to reconcile these two statements. It seems to me the former makes the latter moot: if you can't reason about or debug why the LLM routed a request the way it did, how can you trust it as the router?