Comment by cjonas
10 hours ago
Implementation Notes:
- There is no reason you have to expose the skills through the file system. It's just as easy to add a tool call to load a skill: put a skill ID in the instruction metadata, or provide a `discover_skills` tool if you want to keep skills out of the instructions altogether.
- Another variation is to put a "skills selector" inference in front of your agent invocation. This inference would receive the current inquiry/transcript plus the skills metadata and return a list of potentially relevant skills. Same concept as tool selection; this can save context bandwidth when there is a large number of skills.
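The first note can be sketched as a pair of tools. This is a minimal illustration, not an implementation from the thread; the registry contents, skill IDs, and function names are all hypothetical.

```python
import json

# Hypothetical in-memory skill registry; IDs and descriptions are illustrative.
SKILLS = {
    "pdf-extraction": {"description": "Extract text and tables from PDF files."},
    "web-scraping": {"description": "Fetch and parse web pages."},
}

def discover_skills() -> str:
    """Tool: return skill IDs plus metadata so the model can pick one."""
    return json.dumps([{"id": sid, **meta} for sid, meta in SKILLS.items()])

def load_skill(skill_id: str) -> str:
    """Tool: load the full body of one skill by ID."""
    meta = SKILLS.get(skill_id)
    if meta is None:
        return f"Unknown skill: {skill_id}"
    # A real system would read the skill's instructions from storage here.
    return f"# Skill: {skill_id}\n{meta['description']}"
```

The point is that only `discover_skills` output (the lightweight metadata) sits in context until the model decides a specific skill is needed.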
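The "skills selector" pass can be sketched as below. A trivial keyword-overlap score stands in for the actual LLM inference, and the skill names are made up; the shape to note is that the selector sees only metadata and returns IDs, so only the selected skills' full instructions are loaded into the main agent's context.

```python
# Hypothetical skill metadata; in practice this would be skill front matter.
SKILL_METADATA = {
    "pdf-extraction": "Extract text and tables from PDF documents",
    "spreadsheet-analysis": "Analyze CSV and Excel spreadsheets",
    "web-scraping": "Fetch and parse content from web pages",
}

def select_skills(inquiry: str, top_k: int = 2) -> list[str]:
    """Return the skill IDs most relevant to the inquiry.

    Keyword overlap is a placeholder for the selector inference,
    which would receive the inquiry/transcript plus skill metadata.
    """
    query_terms = set(inquiry.lower().split())
    scored = []
    for skill_id, description in SKILL_METADATA.items():
        overlap = len(query_terms & set(description.lower().split()))
        scored.append((overlap, skill_id))
    scored.sort(reverse=True)
    # Drop zero-score skills so irrelevant ones never enter context.
    return [skill_id for score, skill_id in scored[:top_k] if score > 0]
```

For example, `select_skills("extract tables from a pdf")` ranks `pdf-extraction` first, and the other skills' instructions never consume context.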
> Or have a `discover_skills` tool
Yes, treating the "front matter" of a skill as the "function definition" of a tool call makes them a kind of equivalence class.
This understanding helped me build an LLM-agnostic (and sandboxed) implementation, open-skills[1], well before this standardization was proposed.
1. Open-skills: https://github.com/instavm/open-skills