Comment by lxgr
15 hours ago
Yes, pretty much.
LLM-powered agents are surprisingly human-like in their errors and misconceptions about new or less-than-ubiquitous tools. Skills are basically just small how-to files, sometimes combined with usage examples, helper scripts, etc.
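A skill in this sense can be sketched as a small directory containing a how-to file plus optional helpers. The layout and frontmatter fields below follow the commonly used SKILL.md convention; the `pdf-tools` name and the helper script path are hypothetical, for illustration only:

```markdown
---
name: pdf-tools
description: Extract text from PDF files using the bundled helper script.
---

# PDF tools

To extract text from a PDF, run the bundled helper:

    python scripts/extract_text.py input.pdf > output.txt

The script prints the extracted text to stdout, one page per paragraph.
```

The agent reads the short `description` to decide when to load the full file; any helper scripts ship alongside it in the same directory, so the how-to and the code it references travel together.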