Comment by krackers
18 hours ago
I'm surprised too, considering that in https://x.com/karpathy/status/1977758204139331904 he said this about his NanoChat repo:
>Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution.
And a lot of the tooling he mentions in the OP seems like self-imposed, unnecessary complexity/churn. For the longest time you could say the same about frontend: that you were hopelessly behind if you weren't adopting {tailwind, react, nodejs, angular, svelte, vue}.
At the end of the day, for the things that an LLM does well, you can achieve roughly the same quality of results by "manually" pasting in relevant code context and asking your question. In cases where this doesn't work, I'm not convinced that wrapping it in an agentic harness will give you that much better results.
Most bespoke agent harnesses are obsoleted by the next model release anyway; the two paradigms that seem to reliably work are "manual" LLM invocation and an LLM with access to a CLI.
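By "manual" invocation I mean something as simple as the sketch below: you pick the relevant files yourself, concatenate them into one prompt, and ask a single question, with no agent loop. This assumes the OpenAI Python SDK; the file paths, model name, and question are placeholders, not anything from Karpathy's setup.

```python
# Minimal sketch of the "manual" paradigm: hand-picked context, one question.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set.
from pathlib import Path

from openai import OpenAI

CONTEXT_FILES = ["model.py", "train.py"]  # placeholder paths you choose yourself
QUESTION = "Why does the loss spike after the learning-rate warmup ends?"


def build_prompt(files: list[str], question: str) -> str:
    """Concatenate each file under a labeled header, then append the question."""
    parts = []
    for name in files:
        parts.append(f"### {name}\n{Path(name).read_text()}")
    parts.append(question)
    return "\n\n".join(parts)


def main() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": build_prompt(CONTEXT_FILES, QUESTION)}],
    )
    print(response.choices[0].message.content)


if __name__ == "__main__":
    main()
```

The "LLM with access to a CLI" paradigm is essentially the same loop, except the model is allowed to propose shell commands instead of the human curating the context.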