
Comment by dnautics

7 hours ago

> I think the more you can shift to compile time the better when it comes to agents

Not borne out by evidence. Rust is bottom-mid tier on autocoderbenchmark; TypeScript is only marginally better than JS.

Shifting to compile time is not necessarily great, because the LLM has to vibe its way through the code in situ. If a compiler has to check your code, it's already too late: the LLM does not have your codebase in its weights, and a fetch to read the types of your functions is context-expensive since it's nonlocal.

> if you have to have a compiler check your code it's already too late

If you're running a good agentic AI, it can read the compile errors just like a human would and work to fix them until the build goes through.
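That check-and-fix loop can be sketched in a few lines. This is a minimal Python illustration with stubbed-out model and compiler calls; fix_until_green, stub_check, and stub_fix are hypothetical names, and a real agent would shell out to something like cargo check and send the diagnostics to the LLM instead of the stubs used here:

```python
from typing import Callable

def fix_until_green(
    run_check: Callable[[], tuple[int, str]],
    propose_fix: Callable[[str], None],
    max_rounds: int = 5,
) -> bool:
    """Re-run the compiler check, feeding diagnostics back to the model,
    until the check passes or we give up."""
    for _ in range(max_rounds):
        code, diagnostics = run_check()
        if code == 0:
            return True           # build is green, stop iterating
        propose_fix(diagnostics)  # model edits files based on the errors
    return False

# Stub "compiler" that fails twice, then passes (a stand-in for
# invoking e.g. `cargo check` via subprocess in a real agent).
attempts = {"n": 0}
def stub_check() -> tuple[int, str]:
    attempts["n"] += 1
    return (0, "") if attempts["n"] > 2 else (1, "error[E0308]: mismatched types")

def stub_fix(diagnostics: str) -> None:
    pass  # a real agent would call the LLM with the diagnostics here

print(fix_until_green(stub_check, stub_fix))  # → True, after 3 check rounds
```

The loop itself is language-agnostic; the debate above is really about how many rounds it takes and how much context each round burns.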

  • Which is slow and heavy in Rust. Other languages have that loop too, but faster (and simpler, with no lifetimes to reason about).

    • cargo check is fast. It's only the full build that's slow (barring extreme use of compile-time proc macros, which is rare and crate-specific).

    • I mean, to a first-order approximation, context (the key resource that seems to affect quality) doesn't depend on real compilation speed; presumably the agent is suspended and not burning context while waiting for compilation.

  • How about not making the error in the first place?

    • If you have an LLM that doesn't make errors ever, then you have an ASI, at which point the conversation is meaningless. In the meantime, having a lower error rate but more uncaught errors is less important than making incorrect code impossible to compile, and/or flagged by strict linters.