Comment by theshrike79
10 hours ago
But as far as I know, that was way before tool calling was a thing.
I'm more bullish about small and medium-sized models + efficient tool calling than I am about LLMs too large to run at home without $20k of hardware.
The model doesn't need to have the full knowledge of everything built into it when it has the toolset to fetch, cache and read any information available.
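A minimal sketch of that idea, assuming a small local model behind some inference endpoint (llama.cpp, Ollama, or similar): the knowledge lives in a fetch-and-cache tool, not in the weights. `query_local_model` and the tool-call message shape are hypothetical placeholders, not any particular library's API.

```python
import urllib.request
from pathlib import Path

CACHE_DIR = Path("tool_cache")
CACHE_DIR.mkdir(exist_ok=True)


def fetch_and_cache(url: str) -> str:
    """Tool: fetch a URL, caching the body so repeat lookups skip the network."""
    cached = CACHE_DIR / f"{abs(hash(url))}.txt"
    if cached.exists():
        return cached.read_text()
    body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    cached.write_text(body)
    return body


TOOLS = {"fetch": fetch_and_cache}


def query_local_model(messages: list[dict]) -> dict:
    """Hypothetical: ask the local model for either a final answer
    {"content": ...} or a tool request {"tool": "fetch", "args": {...}}."""
    raise NotImplementedError("wire this to your local inference server")


def run(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = query_local_model(messages)
        if "tool" in reply:  # model asked for outside information
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:  # model answered from context + fetched material
            return reply["content"]
    return "gave up after max_steps"
```

The point of the loop is that a small model only has to decide *when* to call the tool and how to read the result; it doesn't have to memorise the content itself.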