Comment by createaccount99
4 days ago
Anti-feature if you ask me. An agent should be able to pick the stuff it needs from the AGENTS.md, and not blindly use everything.
Everything the agent has to read to pick out stuff costs $.
Context is not infinite. Saving context for what matters is key in working with LLMs.
Context is not infinite yet.
A new standard built around something that may be false very soon is just a bad idea.
We all want to move to local models eventually for privacy and reliability.
They don't (and won't) have infinite context without trickery or massive €€€ use.
The current crop of online LLMs is running on VC money, slightly tempered by subscriptions - but still at a loss. The hype and money will run out, so use them as much as possible now. But keep your workflows portable so they'll still work locally when the time comes.
Don't be that 10x coder who becomes a 0.1x coder when Anthropic has issues on their side =)
Until LLMs treat context as a graph rather than a linear sequence of vectors, no amount of context you shove into their processors will matter; they will always suffer from near-sighted processing of the last few bits. To generate true intelligence, a model needs to be able to jump to specific locations without the intervening vectors affecting its route.
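To make the point concrete, here is a toy sketch (not any real LLM's internals, all names are illustrative): in a flat sequence the "distance" between two related tokens grows with everything written between them, while an explicit relevance edge in a graph lets you jump straight from a definition to its use in one hop.

```python
from collections import deque

def linear_distance(i: int, j: int) -> int:
    """In a flat token sequence, separation grows with every token in between."""
    return abs(i - j)

def graph_distance(edges: dict, start: int, goal: int):
    """BFS hop count over an explicit relevance graph; None if unreachable."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Chunk 0 (say, a definition) and chunk 999 (its use) are far apart linearly...
print(linear_distance(0, 999))             # 999

# ...but directly linked in the graph, so one hop suffices.
edges = {0: [999], 999: [0]}
print(graph_distance(edges, 0, 999))       # 1
```

The hypothetical `edges` mapping stands in for whatever mechanism would mark "these two spans are related"; the point is only that a graph traversal ignores the 998 chunks in between, while linear processing cannot.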