Comment by throwatdem12311
14 days ago
I know it seems counter-intuitive, but are there any agent harnesses that aren't written with AI? All these half-a-million-LoC codebases seem insane to me when I run my business on a full-stack web application that's like 50k lines of code, and my MVP was like 10k. These are just TUIs that call a model endpoint with some shell-out commands. These tools have only existed for a matter of months; half a million LoC is crazy to me.
Check out pi coding agent, https://pi.dev
https://github.com/badlogic/pi-mono/blob/main/AGENTS.md
Who cares about LoC? It's a metric that hasn't mattered since we measured productivity in it in the 1980s. For all we know they made these design choices so they could more easily reuse the code in other codebases. Ideally you'd build the library to do that at the same time, but this is startup time constraints to repay loans and shit.
Bugs and vulnerabilities are roughly linear in the number of lines of code in a project.
"Who cares how much concrete we used in this bridge?"
That would be a sensible comparison if concrete was free
Opencode actually has a pretty solid codebase, quality-wise. I've done brief pokes around it and it's been largely fine.
> just TUIs
For starters, CC's TUI is React-based.
Somebody somewhere is bragging to someone about using React to render a grid of ASCII characters.
https://x.com/trq212/status/2014051501786931427
> Most people's mental model of Claude Code is that "it's just a TUI" but it should really be closer to "a small game engine".
> For each frame our pipeline constructs a scene graph with React, then -> lays out elements -> rasterizes them to a 2d screen -> diffs that against the previous screen -> finally uses the diff to generate ANSI sequences to draw.
> We have a ~16ms frame budget so we have roughly ~5ms to go from the React scene graph to ANSI written.
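The diff-and-redraw step that tweet describes can be sketched roughly like this (a minimal illustration, not Claude Code's actual implementation): rasterize each frame to a character grid, diff it against the previous grid, and emit only the ANSI cursor-move sequences needed to repaint changed cells.

```typescript
// Hypothetical sketch of diff-based terminal repainting.
type Grid = string[][]; // rows of single-character cells

function diffToAnsi(prev: Grid, next: Grid): string {
  let out = "";
  for (let row = 0; row < next.length; row++) {
    for (let col = 0; col < next[row].length; col++) {
      const ch = next[row][col];
      if (prev[row]?.[col] !== ch) {
        // CSI row;colH moves the cursor (1-based), then we write the cell
        out += `\x1b[${row + 1};${col + 1}H${ch}`;
      }
    }
  }
  return out;
}

// Only the cells that changed between frames are redrawn:
const before: Grid = [["h", "i", " "]];
const after: Grid = [["h", "o", "!"]];
console.log(JSON.stringify(diffToAnsi(before, after)));
```

Real renderers like Ink also batch runs of changed cells and track styling state, but the core loop is this shape: the expensive part is the React reconcile and layout, and the ANSI output is just the delta.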
> These are just TUIs that call a model endpoint with some shell-out commands.
Claude Code's CLI is actually horrible: it's a full headless-browser-style rendering pipeline that's converted in real time to text for the terminal. And that fact leaks to the user: when the model outputs ASCII, the converter will happily turn it into Unicode (just yesterday there was a front-page post complaining about Unicode characters breaking Unix pipes and parsers that expect ASCII).
It's ultra annoying during debugging sessions (that is, not when in a full agentic loop where it YOLOs a solution): you can't easily copy/paste from the CLI because the output you get is not what the model actually produced.
Mega, mega, mega annoying.
What should be something simple becomes a Rube Goldberg machine that, of course, fucks up something fundamental: converting the model's characters into something else is just pathetically bad.
Anyone from Anthropic reading? Get your shit together: if you keep this "headless browser rendering converted to text", at least do not fucking modify the characters.
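The failure mode being complained about can be sketched like this (an assumed substitution table for illustration, not Anthropic's actual mapping): a display layer that "prettifies" ASCII punctuation into Unicode lookalikes produces on-screen text that no longer round-trips through ASCII-only tooling when you copy it back out.

```typescript
// Hypothetical prettifier: swaps ASCII punctuation for Unicode lookalikes.
const PRETTIFY: Record<string, string> = {
  '"': "\u201c", // left double quotation mark
  "'": "\u2018", // left single quotation mark
  "-": "\u2010", // Unicode hyphen, not ASCII 0x2D
};

function prettify(s: string): string {
  return [...s].map((ch) => PRETTIFY[ch] ?? ch).join("");
}

const shown = prettify('grep -n "foo" file.txt');
console.log(shown); // looks nearly identical on screen...
console.log(/^[\x00-\x7f]*$/.test(shown)); // ...but it's no longer ASCII
```

Paste `shown` back into a shell and the Unicode hyphen is no longer parsed as an option flag, which is exactly the copy/paste breakage described above.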
No it is not. Ink does not use a browser.