Comment by daemontus

3 days ago

Maybe this is a naive question, but how are "skills" different from just adding a bunch of examples of good/bad behavior into the prompt? As far as I can tell, each skill file is a bunch of good/bad examples of something. Is the difference that the model chooses when to load a certain skill into context?

I think that's one of the key things: skills don't take up any of the model context until the model actively seeks out and uses them.
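To make that concrete, here's a hypothetical sketch (the skill names and helper functions are made up for illustration) of that on-demand pattern: only a one-line description per skill sits in the prompt, and the full skill body is fetched only when the model asks for it.

```python
# Sketch of "progressive disclosure": the prompt carries a token-light
# index of skills; full instructions load only on demand.

SKILLS = {
    "brainstorm": {
        "description": "Structured process for brainstorming designs.",
        "body": "Step 1: restate the problem.\nStep 2: list constraints.",
    },
    "pdf-extract": {
        "description": "How to pull text and tables out of PDFs.",
        "body": "Try text extraction first; fall back to OCR.",
    },
}

def system_prompt() -> str:
    """Token-light index: one line per skill, no bodies included."""
    lines = [f"- {name}: {meta['description']}" for name, meta in SKILLS.items()]
    return "Available skills (load on demand):\n" + "\n".join(lines)

def load_skill(name: str) -> str:
    """Called only when the model actively decides a skill is relevant."""
    return SKILLS[name]["body"]
```

So a hundred skills cost only a hundred short index lines up front; the multi-kilobyte bodies never enter the context unless used.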

Jesse on Bluesky: https://bsky.app/profile/s.ly/post/3m2srmkergc2p

> The core of it is VERY token light. It pulls in one doc of fewer than 2k tokens. As it needs bits of the process, it runs a shell script to search for them. The long end to end chat for the planning and implementation process for that todo list app was 100k tokens.

> It uses subagents to manage token-heavy stuff, including all the actual implementation.

I think it just gives you the ability to easily do that with a slash command, like using "/brainstorm database schema" or something, instead of needing to define what "brainstorm" means each time you want to do it.

What you are suggesting is 1-shot, 2-shot, 5-shot, etc. prompting, which is so effective that it's how benchmarks were presented for a while.