Comment by wg0

4 hours ago

> How does Claude Code work in large codebases?

Simple - it eats up to 35% of the five-hour usage limit on the first prompt, even on small projects, and then you get a five-minute timeout to respond quickly or the caches go bust and you'll pay another 12% to 15% on the next prompt.

The article listed explains how to avoid this. If you naively turn it loose on a big codebase, then yes, you'll burn a lot of tokens while it tries to find stuff.

  • If I write a filesystem watcher in Go that notifies me whenever a file changes, and I put my categorization rules in a file as regular expressions, then (assuming neither the regexes nor their implementation is buggy) there's a snowball's chance in hell of it misnotifying or miscategorizing anything.

    Are LLMs that super reliable in their output already with all the guardrails around?

    Don't think so. Hence it is snake oil, just like dozens of other harnesses.

    It might behave differently than specified, so a human is required to validate every output carefully, or else.
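    To make the comparison concrete, here is a minimal sketch in Go of the deterministic regex-rule classifier described above (the rule set, type names, and helper are illustrative, not from any particular tool): every input is matched against a fixed, ordered list of patterns, so the same path always yields the same category.

    ```go
    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rule pairs a compiled regular expression with a category label.
    type rule struct {
    	pattern  *regexp.Regexp
    	category string
    }

    // defaultRules is a hypothetical rule file loaded into memory.
    // Order matters: the first matching rule wins, so the more
    // specific _test.go pattern comes before the general .go one.
    var defaultRules = []rule{
    	{regexp.MustCompile(`_test\.go$`), "test"},
    	{regexp.MustCompile(`\.go$`), "source"},
    	{regexp.MustCompile(`\.md$`), "docs"},
    }

    // classify returns the category of the first rule that matches
    // the path, or "unmatched" if no rule applies.
    func classify(rules []rule, path string) string {
    	for _, r := range rules {
    		if r.pattern.MatchString(path) {
    			return r.category
    		}
    	}
    	return "unmatched"
    }

    func main() {
    	for _, p := range []string{"watcher_test.go", "main.go", "README.md", "notes.txt"} {
    		fmt.Printf("%s -> %s\n", p, classify(defaultRules, p))
    	}
    }
    ```

    The point of the sketch is the determinism: given the same rules and the same path, `classify` cannot produce a different answer on a second run, which is exactly the property an LLM-based categorizer does not guarantee.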

    • > Are LLMs that super reliable in their output already with all the guardrails around?

      Well, what is your definition of "super reliable in the output", and is it a quantifiable/measurable target or just a feeling?

      Is it "more than humans", "more than senior developers", "almost perfect", "perfect"?

      > It might behave differently than specified and a human is required to validate every output carefully or else.

      Sure, just like meatbag developers. All the security flaws AI finds today were introduced years or decades ago by humans, and hadn't been found by humans (as far as we know) in all that time.


  • This is such a shame; finding where stuff is in a large codebase is my number one use for LLMs. I hate that it relies on grep so much, since I can grep better and faster myself.