
Comment by jmathai

8 hours ago

Skills have frontmatter which includes a name and a description. The description is what determines whether the LLM finds the skill useful for the task at hand.
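For concreteness, the frontmatter is a small YAML block at the top of the skill file. This is a minimal sketch; the field values here are made up, but `name` and `description` are the keys that matter:

```markdown
---
name: pdf-extractor
description: Extracts text and tables from PDF files. Use this skill
  whenever the user asks to read, search, or summarize a PDF.
---
```

Since the description is effectively the retrieval key, it pays to state both what the skill does and when it should be used.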

If your agent isn’t being used, it’s not as simple as “agents aren’t getting called”. You have to figure out how to get the agent invoked.

Sure, but then you're playing a very annoying and boring game of model-whispering: targeting specific model versions that are constantly changing, while hoping the agent still responds correctly no matter what user input surrounds it.

I really only think the game is worth playing against a fixed version of a specific model. The amount of variance we observe between different releases of the same model is enough to force us to update our prompts and re-test. I don't envy anyone who has to try to find some median text that performs okay on every model.
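That re-test loop is easy to mechanize. Here is a minimal sketch of a pinned-model regression check, assuming the OpenAI Python SDK; the snapshot name and the test cases are illustrative:

```python
# Prompt-regression sketch: pin a dated model snapshot and re-run a fixed
# prompt suite whenever the pin changes. Requires OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-2024-08-06"  # a dated snapshot, not a floating alias

CASES = [
    # (prompt, substring the reply must contain)
    ("Reply with exactly the word PONG.", "PONG"),
]

def test_pinned_prompt_behaviour():
    for prompt, expected in CASES:
        resp = client.chat.completions.create(
            model=PINNED_MODEL,
            temperature=0,  # reduces, but does not eliminate, run-to-run variance
            messages=[{"role": "user", "content": prompt}],
        )
        reply = resp.choices[0].message.content
        assert expected in reply, f"{PINNED_MODEL} drifted on: {prompt!r}"
```

Bumping `PINNED_MODEL` and re-running the suite is the "update our prompts and re-test" step made explicit.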

  • About a year ago I made a ChatGPT- and Claude-based hobo RAG-alike solution for exploring legal cases, using document creation and LLMs to craft a rich context window for interrogation in the chat (a rough sketch of that pattern follows below).

    Just maintaining a basic interaction framework, with consistent behaviours in chat at startup, was a daily game of whack-a-mole where well-tested behaviours would shift and alter without rhyme or reason. “Model whispering” is right. Subjectively, it felt like I could feel Anthropic/OpenAI engineers twiddling dials on the other side.

    Writing code that executes the same every time has some minor benefits.
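
    The pattern in question, reduced to a sketch: concatenate selected case documents into the prompt ahead of the user's question. This assumes the Anthropic Python SDK; the directory layout, character budget, and model snapshot are illustrative, not the original project's code.

    ```python
    # Context-stuffing sketch: build a "rich context window" from case
    # documents, then interrogate it with a single chat call.
    from pathlib import Path
    from anthropic import Anthropic

    client = Anthropic()

    def build_context(doc_dir: str, budget_chars: int = 100_000) -> str:
        """Concatenate case documents until a rough character budget is hit."""
        parts, used = [], 0
        for doc in sorted(Path(doc_dir).glob("*.txt")):
            text = doc.read_text()
            if used + len(text) > budget_chars:
                break
            parts.append(f"## {doc.name}\n{text}")
            used += len(text)
        return "\n\n".join(parts)

    def ask(question: str, doc_dir: str = "cases/") -> str:
        context = build_context(doc_dir)
        resp = client.messages.create(
            model="claude-3-5-sonnet-20241022",  # pin a dated snapshot
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": f"Case documents:\n\n{context}\n\nQuestion: {question}",
            }],
        )
        return resp.content[0].text
    ```

    The deterministic parts (globbing, budgeting, formatting) behave the same every run; only the model call drifts, which is exactly where the whack-a-mole lived.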