Comment by bob1029
1 year ago
> So it's also possible that the prompt loop has no special sauce and that the capabilities here do come mostly from the model itself.
The prompt-loop code often encodes intelligence/information that the human developers tend to overlook when evaluating the solution. For example, if you add a filter for invalid JSON and repeatedly invoke the model until good JSON comes out, you are now carrying water for the LLM. The additional capability came from a manual coding exercise and extra money spent on a brute-force search.
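A minimal sketch of the kind of retry loop described above; `call_model` is a hypothetical placeholder for a single LLM completion call, not any particular library's API. The point is that the loop, not the model, is what guarantees well-formed output, and every retry is another paid call.

```python
import json

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for one LLM completion call."""
    raise NotImplementedError

def get_valid_json(prompt: str, max_attempts: int = 5) -> dict:
    """Re-invoke the model until it emits parseable JSON."""
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # discard the bad output and spend another call
    raise RuntimeError(f"no valid JSON after {max_attempts} attempts")
```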