Comment by theshrike79

4 days ago

> Context is not infinite. Saving context for what matters is key in working with LLMs.

Context is not infinite yet.

Building a new standard around something that may be false very soon is just a bad idea.

  • We all want to move to local models eventually for privacy and reliability.

    They don't (and won't) have infinite context without trickery or massive €€€ use.

    The current crop of online LLMs is running on VC money, thinly offset by subscriptions - still at a loss. The hype and money will run out, so use them as much as possible now, but keep your workflows portable so they'll work locally when the time comes (rough numbers at the end of this comment).

    Don't be that 10x coder who becomes a 0.1x coder when Anthropic has issues on their side =)
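
    To put rough numbers on "won't have infinite context without massive €€€", here's a back-of-the-envelope sketch (my figures, using a Llama-3-70B-like shape; nothing official) of how the KV cache alone grows linearly with context length:

      def kv_cache_gib(n_layers, n_kv_heads, head_dim, context_len,
                       bytes_per_value=2):  # fp16/bf16
          # keys + values, per layer, per KV head, per position
          return (2 * n_layers * n_kv_heads * head_dim
                  * context_len * bytes_per_value) / 2**30

      # Llama-3-70B-like shape: 80 layers, 8 KV heads (GQA), head_dim 128
      print(kv_cache_gib(80, 8, 128, 128_000))    # ~39 GiB at 128k tokens
      print(kv_cache_gib(80, 8, 128, 1_000_000))  # ~305 GiB at 1M tokens

    And that's cache on top of the weights themselves - so "infinite" local context means either trickery (quantized caches, sliding windows, sparse attention) or serious hardware money.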

    • I don't see how anyone could build a successful product on cloud LLMs. Even with a perfect workflow, you'll either be gouged by price rises or lose out to model changes and context/prompt divergence. All this "prompt" nonsense is just playing to the LLM audience, and no amount of imprecise prompting will negate the fundamental instability.

      So yeah, you have to use a local LLM if you think there's a viable product to be had. Anyone who's been programming long enough knows that once a project hits the milestone of complete & finished, it can be mothballed for decades, generating utility and requiring limited maintenance. All of that goes out the window if you need a cloud provider to remain stable for a decade.

  • Until LLMs deal with context as a graph and not just a linear sequence of vectors, it won't matter how much context you shove into them; they'll always suffer from near-sighted processing of the last bits. To generate true intelligence, a model needs to be able to jump to specific locations without the intervening vectors affecting its route. A toy version of that idea is sketched below.
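
    Here's a minimal sketch of what I mean (my own toy, not an existing architecture): standard causal attention mixes in every preceding position, while a graph-style mask lets token 9 attend directly to token 0 with nothing in between contributing:

      import numpy as np

      def masked_attention(q, k, v, mask):
          # scaled dot-product attention, with disallowed edges set to -inf
          scores = q @ k.T / np.sqrt(k.shape[-1])
          scores = np.where(mask, scores, -np.inf)
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)
          return weights @ v

      rng = np.random.default_rng(0)
      n, d = 10, 16
      q, k, v = (rng.standard_normal((n, d)) for _ in range(3))

      # linear order: token 9 attends to all ten positions before it
      causal = np.tril(np.ones((n, n), dtype=bool))
      # graph: token 9 attends only to itself and one explicit long-range edge
      graph = np.eye(n, dtype=bool)
      graph[9, 0] = True

      print(masked_attention(q, k, v, causal)[9][:4])  # diluted across 10 positions
      print(masked_attention(q, k, v, graph)[9][:4])   # just token 0 and itself

    Sparse/structured attention and retrieval are real moves in this direction, but today's dense transformers still pay, in compute and in signal dilution, for every intervening token.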