Comment by SOLAR_FIELDS
4 hours ago
Claude can always self-discover its own context. The question becomes whether it's more efficient to have it grepping and ls-ing and otherwise randomly poking around to build a half-baked context, or whether a tailor-made, dynamic context injection can speed that up.
In other words, if you run an identical prompt on a test task that requires deeply discovering how your codebase works, once with the skill and once without, which run performs better on the following metrics, and by how much? (A rough sketch of such a comparison follows the list.)
1. Accuracy / completion of the task
2. Wall clock time to execute the task
3. Token consumption of the task
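A minimal sketch of what that A/B comparison could look like, assuming a hypothetical `run_task()` wrapper around whatever agent invocation is being measured and a hypothetical `SKILL.md` file holding the injected context; accuracy/completion would still need to be graded separately, by hand or against a rubric:

```python
import time

# Hypothetical wrapper around the agent invocation being measured
# (e.g. one agentic session over the codebase). Swap in the real call;
# the placeholder return values just keep the sketch runnable.
def run_task(prompt: str, skill_context: str | None = None) -> dict:
    return {"output": "...", "input_tokens": 0, "output_tokens": 0}

def benchmark(prompt: str, skill_context: str | None) -> dict:
    start = time.perf_counter()
    result = run_task(prompt, skill_context)
    elapsed = time.perf_counter() - start
    return {
        "wall_clock_s": elapsed,                                        # metric 2
        "tokens": result["input_tokens"] + result["output_tokens"],     # metric 3
        "output": result["output"],  # grade accuracy/completion separately (metric 1)
    }

prompt = "Explain how request routing works in this codebase and add a new route."
baseline = benchmark(prompt, skill_context=None)                        # no skill
with_skill = benchmark(prompt, skill_context=open("SKILL.md").read())   # tailored context

print("baseline:  ", baseline["wall_clock_s"], "s,", baseline["tokens"], "tokens")
print("with skill:", with_skill["wall_clock_s"], "s,", with_skill["tokens"], "tokens")
```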