I find it strange how most of these terminal-based AI coding agents have ended up with these attempts at making text UIs flashy. Tons of whitespace, line art, widgets, ascii art and gradients, and now apparently animations. And then what you don't get is the full suite of expected keybindings, tab completion, consistent scrollback, or even flicker-free text rendering. (At least this one seems to not be written with node.js, so I guess there's some chance that the terminal output is optimized to minimize large redraws?).
So they just don't tend to work at all like you'd expect a REPL or a CLI to work, despite having exactly the same interaction model of executing command prompts. But they also don't feel at all like fullscreen Unix TUIs normally would, whether we're talking editors or reader programs (mail readers, news readers, browsers).
Is this just all the new entrants copying Claude Code, or did this trend get started even earlier than that? (This is one of the reasons Aider is still my go-to; it looks and feels the way a REPL is supposed to.)
Well, this specific tool is by a company called Charm, whose mission statement is making the command line glamorous. They have been around for several years prior to the LLM craze.
They make a CLI framework for Go, along with tools built on it.
I came here to say this. They have been building very cool CLI projects, and those projects end up composing into new, bigger projects. This is their latest one (that I know of), and it uses most of the other projects they created before.
They didn't make this project specifically flashy (like Claude Code, which I don't think is flashy at all), but every single one of their other projects is like this.
Also, text UIs have always attempted a sense of flashiness: your BIOS, DOS UIs back in the day, etc. The OP sounds oddly jaded by it for no particular reason.
What bothers me is that what I like about terminals is the scrolling workflow of writing commands and seeing my actions and outputs from various sources and programs sequentially in a log. So what I want is a rich full-HTML multi-program scrolling workflow. Instead, people are combining the worst of both worlds. What are they doing? Give me superior UI in a superior rendering system, not inferior UI in an inferior rendering system, god damn it.
You can run it inside the terminal while still using your code editor with full support for diffs and undo. It works seamlessly with IDEs like Cursor or VS Code, allowing multiple agents to work on different tasks at the same time, such as backend and frontend. The agents can also read each other's rules, including Cursor rules and Crush Markdown files.
I suspect some of it is that these interfaces are rapidly gaining adherents (and developers!) whose preference and accustomed usage is more graphically IDE-ish editors. Not everyone lives their life in a terminal window, even amongst devs. (Or so I’m told; I still have days where I don’t bother starting X/Wayland)
You are showing how young you are. ;-) As someone who grew up in the BBS era, I'm glad this is back; colorful text-based stuff brings back joyful memories. I'm building my own terminal CLI coding agent. My plan is to make it this colorful, with ASCII art, when I'm done; for now I'm focused on features.
Well, they all seem to have issues with multi-line selection, as those get all messed up with decorations, panes and whatever other noise is there. To the best of my awareness, the best a TUI can do is implement its own selection (alt-screen, mouse tracking, etc.; plenty of stuff to handle, including all the compatibility quirks) and use OSC 52 for clipboard operations, but that loses the native look-and-feel and the terminal's own configuration.
(Technically, WezTerm's semantic zones should be the way to solve this for good - but that's WezTerm-only, I don't think any other terminal supports those.)
On the other hand, with GUIs this is not an issue at all. And YMMV, but for me copying snippets, bits of responses and commands is a very frequent operation for any coding agent, TUI, GUI or CLI.
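As a concrete illustration of the OSC 52 route mentioned above: it's just an escape sequence carrying a base64 payload, which a TUI can emit to ask the terminal for a clipboard write. A minimal shell sketch (support varies by terminal, and many disable it by default for security reasons):

```shell
# OSC 52: ESC ] 52 ; c ; <base64 text> BEL
# "c" targets the system clipboard selection.
text='copied from a TUI'
payload=$(printf '%s' "$text" | base64 | tr -d '\n')
printf '\033]52;c;%s\007' "$payload"
```

Emitting this from inside an alt-screen app is what lets a TUI offer copy without native terminal selection, which is exactly the trade-off described.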
I've been drafting a blog post about their pros and cons. You're right, text input doesn't feel like a true REPL, probably because they're not using readline. And we see more borders and whitespace because people can afford the screen space.
But there are perks like mouse support, discoverable commands, and color cues. Also, would you rather someone make a mediocre TUI or a mediocre GUI for your workflows?
For what it's worth, this is exactly why I am working on Jean-Pierre[0], pitched as:
> A command-line toolkit to support you in your daily work as a software programmer. Built to integrate into your existing workflow, providing a flexible and powerful pair-programming experience with LLMs.
The team behind DCD[1] are funding my work, as we see a lot of potential in a local-first, open-source, CLI-driven programming assistant for developers. This is obviously a crowded field, and growing more crowded by the day, but we think there's still a lot of room for improvement in this area.
We're still working on a lot of the fundamentals, but are moving closer to supporting agentic workflows similar to Claude Code, built around your existing workflows, editors and tools, following the Unix philosophy of DOTADIW (Do One Thing And Do It Well).
We're not at a state where we want to promote it heavily, as we're introducing breaking changes to the file format almost daily, but once we're a bit further along, we hope people find it as useful as we have in the past couple of months, integrating it into our existing terminal configurations, editors and local shell scripts.
I have a sneaking suspicion Claude Code is a TUI just because that's more convenient for running on ephemeral VMs (no need to load a desktop OS, instant SSH compatibility), and that they didn't realize everyone would be raw-dogging --dangerously-skip-permissions on their laptop's bare-metal OS.
Uhm, you forgot ANSI animations from BBSes, stuff like the BB demo from aalib, aafire, Midnight Commander with tons of colours, mocp with the same...
Flashy stuff for the terminal isn't new. Heck, in the late 90s/early 00s everyone tried e17 and Eterm at least once. And then KDE3 with XRender extensions brought more fancy stuff to terminals and the like, plus compositor effects with xcompmgr and, later, compiz.
But I'm old-fashioned. I prefer iomenu+xargs+nvi and custom macros.
One nice thing about this is that it's early days for this, and the code is really clear and schematic, so if you ever wanted a blueprint for how to lay out an agent with tool calls and sessions and automatic summarization and persistence, save this commit link.
The big question: which of these new agents can consume local models to a reasonable degree? I would like to ditch the dependency on external APIs, and I'm willing to trade some performance for that.
I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.
Claude Code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy, isn't supported natively, and the tool calling is somewhat messed up.
Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue to allow local models https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app, even Cursor provides some basic support for an OpenAI endpoint.
It will add the feature. I've seen OpenAI claim that developers are adding their own features, seen Anthropic make the same claim, and Aider's Paul often says Aider wrote most of its own code. I started building my own coding CLI for the fun of it, and then I thought, why not have it start developing its own features, and it does too. It's as good as the model. For ish and giggles, I just downloaded Crush, pointed it to a local qwen3-30b-a3b (a very small model), and had it load the code, refactor itself and point out bugs. I have never used LSP, and just wanted to see how it performs compared to tree-sitter.
One of the difficulties -- and one that is currently a big problem in LLM research -- is that comparisons with or evaluations of commercial models are very expensive. I co-wrote a paper recently and we spent more than $10,000 on various SOTA commercial models in order to evaluate our research. We could easily (and cheaply) show that we were much better than open-weight models, but we knew that reviewers would ding us if we didn't compare to "the best."
Even aside from the expense (which penalizes universities and smaller labs), I feel it's a bad idea to require academic research to compare itself to opaque commercial offerings. We have very little detail on what's really happening when, for example, OpenAI does inference. And their technology stack and model can change at any time; users won't know unless they carefully re-benchmark ($$$) every time they use the model. I feel that academic journals should discourage comparisons to commercial models, unless we have very precise information about the architecture, engineering stack, and training data they use.
- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.
- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.
- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.
- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.
- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.
- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.
- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.
- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.
- For a time, two competing projects named OpenCode exist, causing significant community confusion.
- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.
The performance not only depends on the tool, it also depends on the model, and the codebase you are working on (context), and the task given (prompt).
And all these factors are not independent. Some combinations work better than others. For example:
- Claude Sonnet 4 might work well for feature implementation on backend Python code using Claude Code.
- Gemini 2.5 Pro might work better for bug fixes on frontend React codebases.
...
So you can't just test the tools alone and keep everything else constant. Instead you get a combinatorial explosion of tool * model * context * prompt to test.
16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.
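To make the combinatorial explosion concrete, here's a toy sketch; the tool, model and task names are placeholders, not an actual benchmark matrix:

```shell
# Three tools x two models x two tasks already means a dozen runs,
# before you vary codebases (context) or prompt styles at all.
runs=0
for tool in claude-code aider crush; do
  for model in sonnet-4 gemini-2.5-pro; do
    for task in bugfix feature; do
      echo "run: $tool / $model / $task"
      runs=$((runs + 1))
    done
  done
done
echo "total: $runs runs"
```

Each new axis multiplies the total; four values on each of four axes is already 256 combinations.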
- Can't combine models. Claude Code using a combination of Haiku for menial search stuff and Sonnet for thinking is nice.
- Adds a lot of unexplained junk binary files to your directory. It's probably in the docs somewhere, I guess.
- The initial init makes a CHARM.md that tries to be helpful, but nothing it contained seemed like something I'd actually want the model to know. I want simple stuff, like: my Go tests use PascalCasing, e.g. TestCompile.
Oh god please no... can we please just agree on a standard for a well-known single agent instructions file, like AGENT.md [1] perhaps (and yes, this is the standard being shilled by Amp for their CLI tool, I appreciate the irony there). Otherwise we rely on hacks like this [2]
I’ve been playing with Crush over the past few weeks and I’m genuinely bullish on its potential.
I've been following Charm for some time and they’re one of the few groups that get DX and that consistently ship tools that developers love. Love seeing them joining the AI coding race. Still early days, but this is clearly a tool made by people who actually use it.
Another one, but indeed very nice looking. Will definitely be testing it.
What I miss from all of these (EDIT: I see opencode has this for GitHub) is the ability to authenticate with the monthly paid services: GitHub Copilot, Claude Code, OpenAI Codex, Cursor, etc.
That would be the best addition; I have these subscriptions and might not like their interfaces, so it would be nice to be able to switch.
I don't think most of these allow other tools to "use" the monthly subscription. Because of that, you need an API key and have to pay per token. Even Claude Code for a while did not use your Claude subscription.
But now they have a subscription for Claude Code, Copilot has a sub, and some others do too. They might not allow it, but whatever; we are paying, so what's the big deal?
I'm not really into golang, but if I read this [1] correctly, they seem to append the LSP stuff to every prompt, and automatically after each tool that supports it? It seems a bit more "integrated" than just an MCP.
Woah, I love the UI. Compared to the other coding agents I've used (e.g. Claude Code, aider, opencode), this feels like the most enjoyable to use so far.
Anyone try switching LLM providers with it yet? That's something I've noticed to be a bit buggy with other coding agents
Is this the company that did shady things by buying an open source repo and kicking out the contributors? Something to do with OpenCode or SST or something, idk. Could be a different company?
An unfortunate clash. I can say from experience that the sst version has a lot of issues that would benefit from more manpower, even though they are working hard. If only they could resolve their differences.
I’m definitely interested as well. This is the other side of the sst/charm ‘opencode-ai’ fork we’ve been expecting, and I can’t wait to see how they are differentiating. Talented teams on all sides, glad to see indie dev shops getting involved (guess you could include Warp or Sourcegraph here as well, though their funding models are quite different).
One big benefit of opencode is that it lets you authenticate to GitHub Copilot. This lets you switch between all the various models Copilot supports, which is really nice.
Sucks that I can't use any of these, because Claude Code has me in golden handcuffs. I don't care about the CLI, but as a hobbyist I can't afford to call LLM APIs directly.
I've been meaning to try out Opencode on the basis of this comment from a few weeks back where one of the devs indicated that Claude Pro subscriptions worked with Opencode:
> opencode kinda cheats by using Antropic client ID and pretending to be Claude Code, so it can use your existing subscription. [1]
I'd definitely like to see Anthropic provide a better way for the user's choice of clients to take advantage of the subscription. The way things stand today, I feel like I'm left with no choice but to stick to Claude Code for sonnet models and try out cool tools like this one with local models.
Now, with all that said, I did recently have Claude code me up a POC where I used Playwright to automate the Claude desktop app, with the idea being that you could put an API in front of it and take advantage of subscription pricing. I didn't continue messing with it once the concept was proved, but I guess if you really wanted to you could probably hack something together (though I imagine you'd be giving up a lot by ramming interactions through Claude Desktop in this manner). [2]
I thought Claude Code (the subscription) could work with alternate UIs, no? E.g. doesn't Neovim have a Claude Code plugin? I want to say there are one or two more as well.
Though I think in Neovim's case they had to reverse engineer the API calls for Claude Code. Perhaps that's against the TOS.
Regardless, I have the intention to make something similar, so hopefully it's not against the TOS lol.
This one feels refreshing. It’s written in Go, and the TUI is pretty slick. I’ve been running Qwen Coder 3 on a GPU cluster with 2 B200s at $2 per hour, getting 320k context windows and burning through millions of tokens without paying closed labs for API calls.
One thing I'm curious about: Assuming you're using the same underlying models, and putting obvious pricing differences aside: What is the functional difference between e.g. Charm and Claude Code? And Cursor, putting aside the obvious advantages running in a GUI brings and that integration.
Is there secret sauce that would make one better than the other? Available tools? The internal prompting and context engineering that the tool does for you? Again, assuming the model is the same, how similar should one expect the output from one to another be?
Yeah and as far as I know, both Claude Code and obviously Crush here are open source. Cursor isn't, but their code is probably just sitting in javascript in the application bundle and should be reversible if it actually mattered?
If anything it makes me hate it more, because now you have a variety of build systems, even more node_modules heaviness, and ample opportunities for supply chain attacks via opaque transpiled npm packages.
Played with it a bit. So far it lacks some key functionality for my use case: I need to be able to launch an interactive session with a prefilled prompt. I like to spawn tmux sessions running an agent with a prompt in a single command, and then check in on it later if any followup prompting is needed.
Other papercuts: no up/down history, and the "open editor" command appears to do nothing.
Still, it's a _ridiculously_ pretty app. 5 stars. Would that all TUIs were this pleasing to look at.
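For reference, the spawn-and-check-in pattern described above looks something like this; `my-agent` is a placeholder for whichever CLI accepts an initial prompt argument:

```shell
# Start a detached tmux session running an agent with a prefilled
# prompt; exec a shell afterwards so the session stays alive.
tmux new-session -d -s housekeeping \
  'my-agent "fix the failing lint checks"; exec "$SHELL"'

# ...later, check in on it if follow-up prompting is needed:
tmux attach -t housekeeping
```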
Wondered what Claude Code would look like if it were built by the people over at Charmbracelet. I suppose this is it. The terminal is so ideal for agentic coding, and the more interactive the better. I personally would like to be able to enter multiline text in a more intuitive way in Claude Code. This nails it.
Trying this on Windows after installing from npm: when it asks for my ChatGPT API key, it doesn't seem to let me paste it, or type anything, or submit at all. It just sits there until I either go back or force quit.
edit: setting the key as an env variable works tho.
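For anyone else who hits this, the workaround looks like the following; the variable name is an assumption based on the usual OpenAI-compatible convention, so check the docs for the exact one the tool reads:

```shell
# PowerShell (current session only):
#   $env:OPENAI_API_KEY = "sk-..."
# cmd.exe (persists for future sessions):
#   setx OPENAI_API_KEY "sk-..."
# POSIX shells ("sk-..." is a placeholder for your real key):
export OPENAI_API_KEY="sk-..."
```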
Claude Code and Gemini CLI (and OpenAI Codex) are first party from the respective companies. But also kind of products - in extreme cases people pay $200/month for Claude Code and get $thousands and thousands of usage. There's product bundling there beyond just the interface.
I think Claude Code specifically has a reputation for being a 1st class citizen - as in the model is trained and evalled on that specific toolcall syntax.
I think you're pretty clueless to make that claim. They bought something, not stole it, and one of the three core contributors (who is also the original creator of the project) agreed. You should form an opinion based on logic and facts, not on who you follow.
Could you provide some specific details about what is missing? I've been super busy studying and haven't been able to keep up with the gap between Aider and other tools. thank you!
I don't get why terminal agents are so popular of late. Having spent more than a decade in terminal based development (vi*), and now fully moved over to a real IDE (vs code), it seems bonkers to me. The IDE is so much more... integrated
At this point, TUIs still feel like the most streamlined interface for coding agents. They're inherently lighter weight, and generally more true to the context of dev environments.
"Feels like" is a subjective measure. For example, Gemini CLI does feel inherently lighter than something like VS Code. But why should it? It's just a chat interface with a different skin.
I'm also not sure whether Gemini CLI is actually better aligned with the context of development environments.
Anyway—slightly off-topic here:
I’m using Gemini CLI in exactly the same way I use VS Code: I type to it. I’ve worked with a lot of agents across different projects—Gemini CLI, Copilot in all its LLM forms, VS Code, Aider, Cursor, Claude in the browser, and so on. Even Copilot Studio and PowerAutomate—which, by the way, is a total dumpster fire.
From simple code completions to complex tasks, using long pre-prompts or one-shot instructions—the difference in interaction and quality between all these tools is minimal. I wouldn’t even call it a meaningful difference. More like a slight hiccup in overall consistency.
What all of these tools still lack, here in year three of the hype: meaningful improvements in coding endurance or quality. None of them truly stand out—at least not yet.
I like them because the interface is consistent regardless of what editor/IDE I'm using. Also frequently I use it to do stuff like convert files, or look at a structure and then make a shell script to modify it in some way, in which case an IDE is just overhead, and the output is just something I would run in the terminal anyway.
For me, a terminal environment means I can use any tool or tech, without it being compatible with the IDE. Editors, utilities, and runtimes can be chosen, and I'm responsible for ensuring they can interop.
IDEs provide convenience by integrating all of that, so the choice is up to the user: a convenient self-contained environment vs. a more custom, self-assembled one.
VS Code has the terminal(s) right there, I'm not missing out on any tool or tech
What I don't have to do is context switch between applications or interfaces
In other comments I relayed the sentiment that I enjoy not having to custom assemble a dev environment and spend way too much time making sure it works again after some plugin updates or neovim changes their APIs and breaks a bunch of my favorite plugins
Because integrating directly with a very large variety of editors and environments is actually kind of hard. Everyone has their own favorite development environment, and by pulling the LLM agents into a separate area (i.e. a terminal app) you can quickly get to "works in all environments". Additionally, this also implies "works with no dev environment at all": for example, vibe coding a simple HTML-only webpage. All you need is terminal+browser.
All of the IDEs already have the AI integrations, so there's no work to do. And a TUI doesn't avoid the equivalent integration work an IDE needs for a new model; it's the same config for that task.
> works with no dev environment at all
The terminal is a dev environment, my IDE has it built in. Copilot can read both the terminal and the files in my project, it even opens them and shows me the diff as it changes them. No need to switch context between where I normally code and some AI tool. These TUIs feel like the terminal version of the webapp, where I have to go back and forth between interfaces.
Not new to AI agents, either. I'm sure you can set up vim to be like an IDE, but unless you're coding over ssh, I don't know why it's preferable to an actual IDE (even one with vim bindings). GUIs are just better for many things.
If the optimal way to do a particular thing is a grid of rectangular characters with no mouse input, nothing prevents you having one of those in your GUI where it makes sense.
For instance, you can look up the documentation for which keys to press to build your project in your TUI IDE, or you can click the button that says "build" (and hover over the button to see which key to press next time). Why is typing :q<enter> better than clicking the "X" in the top-right corner? Obviously, the former works over ssh, but that's about it.
Slowness is an implementation detail. If MSVC6 can run fast enough on a computer from 1999 (including parsing C++) then we should be able to run things very fast today.
It seems like you might have missed the gap between vi and modern terminal-based development. Neovim with plugins is absolutely amazing and integrated; there are even options like LazyVim that do all the work for you. I took the opposite journey and went from IDE to Neovim, and I'm glad I did. VS Code is a bunch of stuff badly cobbled together in a web app, running in Electron. It's a resource hog and it gets quite slow in big projects. Neovim had a much higher learning curve, but it is so much more powerful than VS Code or even JetBrains stuff in my opinion, and so much snappier too.
> It seems like you might have missed the gap between vi and modern terminal based development.
No, I used neovim and spent way too much time trying to turn it into an IDE, even with the prepackaged setups out there
VS Code is sitting below 5% CPU and 1 GB of memory, so I'm not seeing the resource hog you are talking about. LSPs typically use more resources (and they run outside the editor, the same for both).
I was on Neovim in the end, and 100% agree Lua is so much better than Vimscript, but now I don't need either. I spend no time trying to match what an IDE can do in the terminal, and get to spend that time building the things I'm actually interested in. I recall Linus saying the reason he (at the time) used Fedora was that it just worked, and he could spend his time on the kernel instead of tinkering to get Linux working. This is one of the biggest reasons I stopped using (neo)vim.
I had lots of problems with plugins in the ecosystem breaking, becoming incompatible with others, or often falling into unmaintained status. Integrations with external SaaS services are much better too
Also information density (and ease of access) as a peer comment has mentioned
For me, the workflow that Claude Code provides via VSCode plugins or even IntelliJ integration is great. TUI for talking to the agent and then some mild GUI gloss around diffs and such.
I like terminal things because they are easy to use in context wherever I need them - whether that's in my shell locally or over SSH, or in the integrated terminal in whatever IDE I happen to be using.
I use vim if I need to make a quick edit to a file or two.
Idk, terminal just seems to mesh nicely into whatever else I'm doing, and lets me use the right tool for the job. Feels good to me.
My VS Code has a terminal and can remote into any machine and edit code / terminal there.
What I don't get is going back to terminal first approaches and why so many companies are putting these out (except that it is probably (1) easy to build (2) everyone is doing it hype cycle). It was similar when everyone was building ChatGPT functions or whatever before MCP came out. I expect the TUI cycle will fade as quickly as it rose
I like them because they're easier to launch multiple instances of and take fewer resources. Being able to fire agents off into tmux sessions to tackle small-fry issues that they can usually oneshot is a powerful tool to fight the decay of a codebase from high prio work constantly pushing out housekeeping.
I think it lets developers concentrate their energy on improving the agentic experience, which matters more right now. It's hard to keep up with all the models, which the developers have to write support code for. Once the products mature, I bet they'll go visual again.
I also use IDEs and I think people who use terminal-based editors are lunatics but I prefer terminal-based coding agents (I don't use them a lot to be fair).
It's easier to see the diff file by file and really control what the AI does IMO.
On another note VS Code is not an IDE, it's a text editor.
Terminal-based editors can work as an IDE too, with diffs and the like. Emacs is like that: it has Magit, ediff and who knows what else. And Vim can do the same, of course.
No, thanks. I prefer the old way: books, some editor (I like both emacs and nvi), books with exercises, and maybe some autocomplete setup for function names/procedures, token words and the like.
That's right. Charm has been making pretty TUIs since the beginning of the group. Bubble Tea and VHS are amazing; everyone should try them.
Say more about what you mean by "multi-program scrolling workflow", if you don't mind
Nah, this type of text UI has been charmbracelet's whole thing since before AI agents appeared.
I quite like them; unlike traditional TUIs, the keybindings are actually intuitively discoverable, which is nice.
At least one can use Claude Code within emacs: https://github.com/stevemolitor/claude-code.el
You can also just run it in vterm
They are easier to make than full-fledged user interfaces, so you get to see more of them.
Well, they all seem to have issues with multi-line selection, which gets messed up by decorations, panes, and whatever other noise is there. To the best of my knowledge, the best a TUI can do is implement its own selection (so: alt-screen, mouse tracking, etc., plenty of stuff to handle, including all the compatibility quirks) and use OSC 52 for clipboard operations, but that loses the native look-and-feel and terminal configuration.
(Technically, WezTerm's semantic zones should be the way to solve this for good - but that's WezTerm-only, I don't think any other terminal supports those.)
On the other hand, with GUIs this is not an issue at all. And YMMV, but for me copying snippets, bits of responses and commands is a very frequent operation for any coding agent, TUI, GUI or CLI.
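For the curious, the OSC 52 escape mentioned above is small enough to sketch. Here is a minimal Go version; the sequence format itself is standard, but the helper name is mine:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// osc52 builds the escape sequence that asks the hosting terminal to put
// text on the system clipboard: ESC ] 52 ; c ; <base64 payload> BEL.
// The "c" selects the clipboard (vs. "p" for the primary selection).
func osc52(text string) string {
	return fmt.Sprintf("\x1b]52;c;%s\x07",
		base64.StdEncoding.EncodeToString([]byte(text)))
}

func main() {
	// Writing this to the tty sets the clipboard in terminals that
	// support OSC 52 (and have it enabled); no desktop tooling needed.
	fmt.Print(osc52("copied from a TUI"))
}
```

Support varies by terminal (and is sometimes off by default for security reasons), which is part of why it loses the native feel the comment describes.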
This is debatable; a proper TUI has the same complexities as a conventional UI, plus legacy rendering.
Flashy TUIs have been around for a few years. Check out the galleries for TUI frameworks:
https://ratatui.rs/showcase/apps/
https://github.com/charmbracelet/bubbletea/tree/main/example...
https://textual.textualize.io/
I've been drafting a blog post about their pros and cons. You're right, text input doesn't feel like a true REPL, probably because they're not using readline. And we see more borders and whitespace because people can afford the screen space.
But there are perks like mouse support, discoverable commands, and color cues. Also, would you prefer someone make a mediocre TUI or a mediocre GUI for your workflows?
It feels like going back to NC again (Norton Commander)
For what it's worth, this is exactly why I am working on Jean-Pierre[0], pitched as:
> A command-line toolkit to support you in your daily work as a software programmer. Built to integrate into your existing workflow, providing a flexible and powerful pair-programming experience with LLMs.
The team behind DCD[1] are funding my work, as we see a lot of potential in a local-first, open-source, CLI-driven programming assistant for developers. This is obviously a crowded field, and growing more crowded by the day, but we think there's still a lot of room for improvement in this area.
We're still working on a lot of the fundamentals, but are moving closer to supporting agentic workflows similar to Claude Code, but built around your existing workflows, editors and tools, using the Unix philosophy of DOTADIW.
We're not at a state where we want to promote it heavily, as we're introducing breaking changes to the file format almost daily, but once we're a bit further along, we hope people find it as useful as we have in the past couple of months, integrating it into our existing terminal configurations, editors and local shell scripts.
[0]: https://github.com/dcdpr/jp [1]: https://contract.design
DOTADIW = Do One Thing And Do It Well
^ for the uninitiated
People have been making TUIs since time immemorial.
Discover the joys of Turbo Vision and things like Norton Commander, DOS Navigator, Word Perfect etc.
The problem is that most current tools can do neither the TUI part nor the terminal part right.
I wouldn’t describe those traditional TUIs as trying to be flashy, though. They were largely utilitarian.
1 reply →
I have a sneaking suspicion Claude Code is a TUI just because that's more convenient for running on ephemeral VMs (no need to load a desktop OS, instant SSH compatibility), and that they didn't realize everyone would be raw-dogging --dangerously-skip-permissions on their laptop's bare-metal OS.
> I find it strange how most of these terminal-based AI coding agents have ended up with these attempts at making text UIs flashy.
It's next-gen script kids.
If true, GOOD.
I 100% unironically believe we're better off with more script kiddies today, not fewer.
5 replies →
Uhm, you forgot ANSI animations from BBSes, stuff like the bb demo from AAlib, aafire, Midnight Commander with tons of colours, mocp with the same...
Flashy stuff for the terminal isn't new. Heck, in the late 90s/early 00s everyone tried e17 and Eterm at least once. And then KDE3 with XRender extensions brought more fancy stuff to terminals and the like, plus compositor effects with xcompmgr and, later, compiz.
But I'm old fashioned. I prefer iomenu+xargs+nvi and custom macros.
1 reply →
Why is it strange? They're making it look slick and powerful and sci-fi and cool, as an (extremely successful) marketing gimmick.
One nice thing about this is that it's early days for this, and the code is really clear and schematic, so if you ever wanted a blueprint for how to lay out an agent with tool calls and sessions and automatic summarization and persistence, save this commit link.
The commit link is https://github.com/charmbracelet/crush/releases/tag/v0.1.8
Thanks for the tip! I trust your judgement so this repo just got more interesting for me.
For anyone else who wants to actually be able to _read_ what's happening in the demo GIF, I slowed it down in ffmpeg and converted it to video form:
https://share.cleanshot.com/XBXQbSPP
Low effort comment but an upvote felt inadequate: thanks!
The big question: which of these new agents can use local models to a reasonable degree? I would like to ditch the dependency on external APIs and am willing to trade some performance for it.
Crush has an open issue (two weeks old) to add Ollama support; it's in progress.
They should add "custom endpoint" support instead [0].
[0] https://github.com/microsoft/vscode/issues/249605
FYI it works already even without this feature branch (you'll just have to add your provider and models manually)
```
{
}
```
why?
It's basic: edit the config file. I just downloaded it; in ~/.cache/share/crush/providers.json, add your own provider or edit an existing one.
Edit api_endpoint, done.
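Something along these lines, presumably. To be clear, only the api_endpoint field name comes from this thread; the rest of the schema (and the model name) is guessed for illustration, so check against the providers.json Crush actually generates:

```json
{
  "local-ollama": {
    "name": "Local Ollama",
    "api_endpoint": "http://localhost:11434/v1",
    "models": [
      { "id": "qwen2.5-coder", "name": "Qwen 2.5 Coder" }
    ]
  }
}
```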
nice, that would be my reason to use Crush.
1 reply →
Most of these agents work with any OpenAI compatible endpoints.
Actually not really.
I spent at least an hour trying to get OpenCode to use a local model and then found a graveyard of PRs begging for Ollama support or even the ability to simply add an OpenAI endpoint in the GUI. I guess the maintainers simply don't care. Tried adding it to the backend config and it kept overwriting/deleting my config. Got frustrated and deleted it. Sorry but not sorry, I shouldn't need another cloud subscription to use your app.
Claude code you can sort of get to work with a bunch of hacks, but it involves setting up a proxy and also isn't supported natively and the tool calling is somewhat messed up.
Warp seemed promising, until I found out the founders would rather alienate their core demographic despite ~900 votes on the GH issue to allow local models https://github.com/warpdotdev/Warp/issues/4339. So I deleted their crappy app, even Cursor provides some basic support for an OpenAI endpoint.
6 replies →
OpenHands lets you set any LLM you want. https://github.com/All-Hands-AI/OpenHands
Aider says they do, but I haven’t tried it.
https://aider.chat/docs/llms.html
Aider has built in support for lm studio endpoints.
https://aider.chat/docs/llms/lm-studio.html
What happens if you just point it at its own source and ask it to add the feature?
It will add the feature. I saw OpenAI claim that developers are adding their own features, saw Anthropic make the same claim, and Aider's Paul often says Aider wrote most of its own code. I started building my own coding CLI for the fun of it, and then I thought, why not have it start developing features, and it does too. It's as good as the model. For ish and giggles, I just downloaded Crush, pointed it at a local qwen3-30b-a3b (a very small model), and had it load the code, refactor itself, and point out bugs. I've never used LSP and just wanted to see how it performs compared to tree-sitter.
all of them, you can even use claude-code with a local model
sst/opencode
But only a few models can actually execute commands effectively.. what is it, Claude and Gemini? Did I miss any?
4 replies →
has a ton of bugs
1 reply →
I would love a comparison between all these new tools, like this with Claude Code, opencode, aider and cortex.
I just can’t get an easy overview of how each tool works and is different
One of the difficulties -- and one that is currently a big problem in LLM research -- is that comparisons with or evaluations of commercial models are very expensive. I co-wrote a paper recently and we spent more than $10,000 on various SOTA commercial models in order to evaluate our research. We could easily (and cheaply) show that we were much better than open-weight models, but we knew that reviewers would ding us if we didn't compare to "the best."
Even aside from the expense (which penalizes universities and smaller labs), I feel it's a bad idea to require academic research to compare itself to opaque commercial offerings. We have very little detail on what's really happening when OpenAI, for example, does inference. And their technology stack and model can change at any time, and users won't know unless they carefully re-benchmark ($$$) every time they use the model. I feel that academic journals should discourage comparisons to commercial models, unless we have very precise information about the architecture, engineering stack, and training data they use.
You have to separate the model from the interface, imho.
You can totally evaluate these as GUIs, CLIs, and TUIs with more or fewer features and connectors.
Model quality is about benchmarks.
Aider is great at showing benchmarks to its users.
gemini-cli now tells you the % of correct tool calls at the end of a session.
This used to be opencode but was renamed after some fallout between the devs I think.
If anyone is curious on the context:
https://x.com/thdxr/status/1933561254481666466 https://x.com/meowgorithm/status/1933593074820891062 https://www.youtube.com/watch?v=qCJBbVJ_wP0
Gemini summary of the above:
- Kujtim Hoxha creates a project named TermAI using open-source libraries from the company Charm.
- Two other developers, Dax (a well-known internet personality and developer) and Adam (a developer and co-founder of Chef, known for his work on open-source and developer tools), join the project.
- They rebrand it to OpenCode, with Dax buying the domain and both heavily promoting it and improving the UI/UX.
- The project rapidly gains popularity and GitHub stars, largely due to Dax and Adam's influence and contributions.
- Charm, the company behind the original libraries, offers Kujtim a full-time role to continue working on the project, effectively acqui-hiring him.
- Kujtim accepts the offer. As the original owner of the GitHub repository, he moves the project and its stars to Charm's organization. Dax and Adam object, not wanting the community project to be owned by a VC-backed company.
- Allegations surface that Charm rewrote git history to remove Dax's commits, banned Adam from the repo, and deleted comments that were critical of the move.
- Dax and Adam, who own the opencode.ai domain and claim ownership of the brand they created, fork the original repo and launch their own version under the OpenCode name.
- For a time, two competing projects named OpenCode exist, causing significant community confusion.
- Following the public backlash, Charm eventually renames its version to Crush, ceding the OpenCode name to the project now maintained by Dax and Adam.
3 replies →
yea two of the devs did a crazy rug pull
1 reply →
The performance not only depends on the tool, it also depends on the model, and the codebase you are working on (context), and the task given (prompt).
And all these factors are not independent. Some combinations work better than others. For example:
- Claude Sonnet 4 might work well for feature implementation on backend Python code using Claude Code.
- Gemini 2.5 Pro works better for bug fixes on frontend React codebases.
...
So you can't just test the tools alone and keep everything else constant. Instead you get a combinatorial explosion of tool * model * context * prompt to test.
16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.
https://eval.16x.engineer/
Played around with it for a serious task for 15 mins. Compared to Claude Code:
Pros:
- Beautiful UI
- Useful sidebar, keep track of changed files, cost
- Better UX for accepting changes (has hotkeys, shows nicer diff)
Cons:
- Can't combine models. Claude Code using a combination of Haiku for menial search stuff and Sonnet for thinking is nice.
- Adds a lot of unexplained junk binary files in your directory. It's probably in the docs somewhere I guess.
- The initial init makes some CHARM.md that tries to be helpful, but everything it had did not seem like helpful things I want the model to know. Simple stuff, like, my Go tests use PascalCasing, e.g. TestCompile.
- Ctrl+C to exit crashed my terminal.
> The initial init makes some CHARM.md
Oh god please no... can we please just agree on a standard for a well-known single agent instructions file, like AGENT.md [1] perhaps (and yes, this is the standard being shilled by Amp for their CLI tool, I appreciate the irony there). Otherwise we rely on hacks like this [2]
[1] https://ampcode.com/AGENT.md
[2] https://kau.sh/blog/agents-md/
imo they should all support AGENT.md, but because of differences in tools, you often need an additional file per-agent.
It's "glamorous", even in UK English.
https://dictionary.cambridge.org/dictionary/english/glamorou...
I’ve been playing with Crush over the past few weeks and I’m genuinely bullish on its potential.
I've been following Charm for some time and they’re one of the few groups that get DX and that consistently ship tools that developers love. Love seeing them joining the AI coding race. Still early days, but this is clearly a tool made by people who actually use it.
Another one, but indeed very nice looking. Will definitely be testing it.
What I miss from all of these (EDIT: I see opencode has this for GitHub) is the ability to authenticate with the monthly paid services: GitHub Copilot, Claude Code, OpenAI Codex, Cursor, etc.
That would be the best addition; I have these subscriptions and might not like their interfaces, so it would be nice to be able to switch.
I don't think most of these allow other tools to "use" the monthly subscription. Because of that you need an API key and have to pay per token. Even Claude Code for a while did not use your Claude subscription.
But now they have a subscription for Claude Code, Copilot has a sub, and some others do too. They might not allow it, but whatever; we're paying, so what's the big deal?
Opencode does
> LSP-Enhanced: Crush uses LSPs for additional context, just like you do
This is the most interesting feature IMO, interested to see how this pans out. The multiple sessions / project also seems interesting.
There are LSP MCPs so you can use them with other agents too.
I'm not really into golang, but if I read this [1] correctly, they seem to append the LSP stuff to every prompt, and automatically after each tool call that supports it? It seems a bit more "integrated" than just an MCP.
[1] - https://github.com/charmbracelet/crush/blob/317c5dbfafc0ebda...
1 reply →
Woah I love the UI. Compared to the other coding agents I've used (eg. Claude Code, aider, opencode) this feels like the most enjoyable to use so far.. Anyone try switching LLM providers with it yet? That's something I've noticed to be a bit buggy with other coding agents
Bubble Tea has always been an amazing TUI framework. I find React-based TUIs (which is what Claude Code uses) buggy, and I always have to work against them.
Agreed. Charm has a solid track record of great TUIs. While I appreciate a good DSL, I don't think React for a TUI (via ink) is working out well.
Yes, me too. The inline syntax highlighting is very nice. I hope CC steals liberally.
Charmbracelet is amazing. Will there be an equivalent of Claude Code's CLAUDE.md files?
Please use https://agent.md/ https://github.com/agentmd/agent.md
Nice, this definitely needs to be standardized
it's CRUSH.md https://github.com/charmbracelet/crush/blob/main/CRUSH.md
Is this the company that did shady things by buying an open source repo and kicking out the contributors? Something to do with OpenCode or SST or something idk. Could be a different company ?
Yes, this is that company. This is the "original":
https://github.com/sst/opencode
Thanks Charm team for jumping on my feature request so I can really put Crush through its paces with https://container-use.com. https://github.com/charmbracelet/crush/issues/424 https://github.com/charmbracelet/crush/pull/443
looks cool - has anyone compared it to opencode[^1] yet?
[1]: https://github.com/sst/opencode
Looks like this was the other opencode[0] and got (sensibly) rebranded:
[0]: https://github.com/opencode-ai/opencode
Exactly right. Was this one just open sourced? I don't remember seeing it when the sst/opencode debacle broke. They're both under heavy development:
https://github.com/charmbracelet/crush/pulse/monthly
https://github.com/sst/opencode/pulse/monthly
An unfortunate clash. I can say from experience that the sst version has a lot of issues that would benefit from more manpower, even though they are working hard. If only they could resolve their differences.
I’m definitely interested as well. This is the other side of the sst/charm ‘opencode-ai’ fork we’ve been expecting, and I can’t wait to see how they are differentiating. Talented teams on all sides, glad to see indie dev shops getting involved (guess you could include Warp or Sourcegraph here as well, though their funding models are quite different).
One big benefit of opencode is that it lets you authenticate to GitHub Copilot. This lets you switch between all the various models Copilot supports, which is really nice.
What if you don’t have a copilot plan, can you still authenticate to your GitHub account and get some free tier level services ?
2 replies →
They mention FreeBSD support in the README so that adds a couple points
All of Charmbracelet's open source stuff is Go-based, which supports a bunch of platforms.
This is not open source.
3 replies →
Sucks that I can't use any of these because Claude Code has me in golden handcuffs. I don't care about the CLI, but as a hobbyist I can't afford to call LLM APIs directly.
I've been meaning to try out Opencode on the basis of this comment from a few weeks back where one of the devs indicated that Claude Pro subscriptions worked with Opencode:
> opencode kinda cheats by using Anthropic's client ID and pretending to be Claude Code, so it can use your existing subscription. [1]
I'd definitely like to see Anthropic provide a better way for the user's choice of clients to take advantage of the subscription. The way things stand today, I feel like I'm left with no choice but to stick to Claude Code for sonnet models and try out cool tools like this one with local models.
Now, with all that said, I did recently have Claude code me up a POC where I used Playwright to automate the Claude desktop app, with the idea being that you could put an API in front of it and take advantage of subscription pricing. I didn't continue messing with it once the concept was proved, but I guess if you really wanted to you could probably hack something together (though I imagine you'd be giving up a lot by ramming interactions through Claude Desktop in this manner). [2]
[1]: https://github.com/epiccoleman/claude-automator
I thought Claude Code (sub) could work with alternate UIs, no? Eg doesn't Neovim have a Claude Code plugin? I want to say there are one or two more as well.
Though I think in Neovim's case they had to reverse engineer the API calls for Claude Code. Perhaps that's against the ToS.
Regardless, I intend to make something similar, so hopefully it's not against the ToS, lol.
1 reply →
Beautiful terminal interface, well done. For people using Crush, how do you feel it compares to Claude Code or Cursor?
This one feels refreshing. It’s written in Go, and the TUI is pretty slick. I’ve been running Qwen Coder 3 on a GPU cluster with 2 B200s at $2 per hour, getting 320k context windows and burning through millions of tokens without paying closed labs for API calls.
Are you using a service for the GPU cluster?
I'd like to try this out, are you renting on one of the open renting platforms?
how many tk/sec are you getting on that setup especially when you have 100k+ tokens?
One thing I'm curious about: assuming you're using the same underlying models, and putting obvious pricing differences aside, what is the functional difference between e.g. Crush and Claude Code? Or Cursor, putting aside the obvious advantages that running in a GUI and its editor integration bring?
Is there secret sauce that would make one better than the other? Available tools? The internal prompting and context engineering that the tool does for you? Again, assuming the model is the same, how similar should one expect the output from one to another be?
the secret sauce will be the available tools, prompting, context engineering, etc, yup whatever "agentic algorithm" has been built in.
I would honestly think no; what couldn't be reverse engineered eventually? We see this all the time.
Am curious about such results, it's one thing to think it's another to know! :D
Yeah and as far as I know, both Claude Code and obviously Crush here are open source. Cursor isn't, but their code is probably just sitting in javascript in the application bundle and should be reversible if it actually mattered?
I'm happy to see some LLM tooling in Go, I really don't want to touch anything to do with JavaScript/npm/Python if I can help it.
I'm guessing building this is what Charm raised the funds for.
Can’t hate on JS anymore, we have typescript now :)
If anything it makes me hate it more, because now you have a variety of build systems, even more node_modules heaviness, and ample opportunities for supply chain attacks via opaque transpiled npm packages.
3 replies →
Played with it a bit. So far it lacks some key functionality for my use case: I need to be able to launch an interactive session with a prefilled prompt. I like to spawn tmux sessions running an agent with a prompt in a single command, and then check in on it later if any followup prompting is needed.
Other papercuts: no up/down history, and the "open editor" command appears to do nothing.
Still, it's a _ridiculously_ pretty app. 5 stars. Would that all TUIs were this pleasing to look at.
There is history scrolling; you have to focus the chat with Tab first.
Looks "Glamourous" but lacks the basics:
- Up/down history
- Copy text
Other than these issues feels much nicer than Claude Code since the screen does not shake violently.
I wondered what Claude Code would look like if it were built by the people over at Charmbracelet. I suppose this is it.
The terminal is so ideal for agentic coding and the more interactive the better. I personally would like to be able to enter multiline text in a more intuitive way in claude code. This nails it.
Is Groq still "free"? Has anyone tested Crush with a free Groq key to see how much mileage you can get out of it?
Trying this on Windows after installing from npm: when it asks for my ChatGPT API key, it doesn't seem to let me paste it, type anything, or submit at all. It just sits there until I either go back or force quit.
edit: setting the key as an env variable works tho.
Other than switching LLMs, if I'm already mostly using Claude, any reason to use this over CC?
One problem with these agents is that the tokens don't count for your Claude Max subscription. (Same reason I use CC instead of Zed's AI agent.)
opencode allows auth via Anthropic to enable claude max subscription.
So what are the differences between this one, Claude Code and Gemini CLI?
Claude Code and Gemini CLI (and OpenAI Codex) are first party from the respective companies. But also kind of products - in extreme cases people pay $200/month for Claude Code and get $thousands and thousands of usage. There's product bundling there beyond just the interface.
I think Claude Code specifically has a reputation for being a 1st class citizen - as in the model is trained and evalled on that specific toolcall syntax.
Claude Code uses Claude, Gemini CLI uses Gemini, and this one can be configured to use any model you want.
How to handle an OSS dispute where you're the baddy: rebrand and hope nobody notices.
Let's not forget they're the company that bought an OSS project, OpenCode, and tried to "steal" it
I think you're pretty clueless to make that claim; they bought something, they didn't steal it, and one of the three core contributors (who is also the original creator of the project) agreed. You should form an opinion based on logic and facts, not on who you follow.
You're talking about the wrong company. This is not the company you think it is. These are the creators of `bubbletea`, a popular TUI framework in Go.
it is the same company
I'm unfamiliar with this. How would they steal it if they bought the open source project?
can you slow down the gif
Yes, someone has. See https://news.ycombinator.com/item?id=44738004
Very interesting to see so many new TUI tools for llm.
Opencode allows auth via Claude Max, which is a huge plus over requiring API (ANTHROPIC_API_KEY)
Can you integrate this with/forward through a GitHub Copilot subscription?
I am starring this just for the aesthetic; they absolutely nailed it.
Why does this require Xcode 26 to be installed instead of Xcode 16?
Silly
Someone please make/release a Rust CLI. OpenAI what are you doing with Codex?
https://github.com/bosun-ai/kwaak
Maybe the oldest and most popular one is written in Rust! https://github.com/sigoden/aichat
Check out the Q CLI. It's an open source terminal coding agent written in Rust.
https://github.com/aws/amazon-q-developer-cli
https://github.com/block/goose
Why not use Aider?
Aider is not agentic.
Could you provide some specific details about what is missing? I've been super busy studying and haven't been able to keep up with the gap between Aider and other tools. thank you!
3 replies →
Oh, it’s by Charm!
I don't get why terminal agents are so popular of late. Having spent more than a decade in terminal based development (vi*), and now fully moved over to a real IDE (vs code), it seems bonkers to me. The IDE is so much more... integrated
At this point, TUI's still feel like the most streamlined interface for coding agents. They're inherently lighter weight, and generally more true to the context of dev environments.
"Feels like" is a subjective measure. For example, Gemini CLI does feel inherently lighter than something like VS Code. But why should it? It's just a chat interface with a different skin.
I'm also not sure whether Gemini CLI is actually better aligned with the context of development environments.
Anyway—slightly off-topic here:
I’m using Gemini CLI in exactly the same way I use VS Code: I type to it. I’ve worked with a lot of agents across different projects—Gemini CLI, Copilot in all its LLM forms, VS Code, Aider, Cursor, Claude in the browser, and so on. Even Copilot Studio and PowerAutomate—which, by the way, is a total dumpster fire.
From simple code completions to complex tasks, using long pre-prompts or one-shot instructions—the difference in interaction and quality between all these tools is minimal. I wouldn’t even call it a meaningful difference. More like a slight hiccup in overall consistency.
What all of these tools still lack, here in year three of the hype: meaningful improvements in coding endurance or quality. None of them truly stand out—at least not yet.
1 reply →
I like them because the interface is consistent regardless of what editor/IDE I'm using. Also frequently I use it to do stuff like convert files, or look at a structure and then make a shell script to modify it in some way, in which case an IDE is just overhead, and the output is just something I would run in the terminal anyway.
Integration trades convenience for flexibility.
For me, a terminal environment means I can use any tool or tech, without it being compatible with the IDE. Editors, utilities, and runtimes can be chosen, and I'm responsible for ensuring they can interop.
IDEs being convenience by integrating all of that, so the choice is up to the user: A convenient self contained environment, vs a more custom self assembled one.
Choose your own adventure.
VS Code has the terminal(s) right there, I'm not missing out on any tool or tech
What I don't have to do is context switch between applications or interfaces
In other comments I relayed the sentiment that I enjoy not having to custom assemble a dev environment and spend way too much time making sure it works again after some plugin updates or neovim changes their APIs and breaks a bunch of my favorite plugins
Because integrating directly with a very large variety of editors and environments is actually kind of hard. Everyone has their own favorite development environment, and by pulling the LLM agent out into a separate area (i.e. a terminal app) you quickly get to "works in all environments". Additionally, this also implies "works with no dev environment at all". For example, vibe coding a simple HTML-only webpage: all you need is a terminal and a browser.
All of the IDEs already have AI integrations, so there's no work to do. And integrating a new model takes the same configuration work in a TUI as in an IDE.
> works with no dev environment at all
The terminal is a dev environment, my IDE has it built in. Copilot can read both the terminal and the files in my project, it even opens them and shows me the diff as it changes them. No need to switch context between where I normally code and some AI tool. These TUIs feel like the terminal version of the webapp, where I have to go back and forth between interfaces.
3 replies →
Not new to AI agents, either. I'm sure you can set up vim to be like an IDE, but unless you're coding over ssh, I don't know why it's preferable to an actual IDE (even one with vim bindings). GUIs are just better for many things.
If the optimal way to do a particular thing is a grid of rectangular characters with no mouse input, nothing prevents you having one of those in your GUI where it makes sense.
For instance, you can look up the documentation for which keys to press to build your project in your TUI IDE, or you can click the button that says "build" (and hover over the button to see which key to press next time). Why is typing :q<enter> better than clicking the "X" in the top-right corner? Obviously, the former works over ssh, but that's about it.
Slowness is an implementation detail. If MSVC6 can run fast enough on a computer from 1999 (including parsing C++) then we should be able to run things very fast today.
Ever heard of Emacs? WPE for terminals as a C/C++ IDE? Free Pascal?
Being an IDE in a terminal doesn't mean you can't have menus, or that everything must be driven with vi modal keys and commands.
Clicking the X at the top right corner... not exactly muscle memory. Way slower than :q.
1 reply →
I primarily code over ssh with VS Code Remote to cloud vm instances
It seems like you might have missed the gap between vi and modern terminal-based development. Neovim with plugins is absolutely amazing and integrated; there are even options like LazyVim that do all the work for you. I took the opposite journey, going from an IDE to Neovim, and I'm glad I did. VS Code is a bunch of stuff badly cobbled together in a web app running in Electron. It's a resource hog and gets quite slow in big projects. Neovim had a much higher learning curve but is so much more powerful than VS Code or even JetBrains stuff in my opinion, and so much snappier too.
> It seems like you might have missed the gap between vi and modern terminal based development.
No, I used neovim and spent way too much time trying to turn it into an IDE, even with the prepackaged setups out there
VS Code is sitting below 5% CPU and 1 GB of memory; I'm not seeing the resource hog you're talking about. LSPs typically use more resources (and they run outside the editor, the same for both).
Neo(lazy)vim user here... not sure what I'm missing from an IDE.
Language server: check. Plugin ecosystem: check. Running tests on demand: check. Lua sucks, but that's an acceptable compromise, as Vimscript is worse.
I was on Neovim in the end, and I 100% agree Lua is so much better than Vimscript, but now I don't need either. I spend no time trying to match in the terminal what an IDE can do, and I get to spend that time building the things I'm actually interested in. I recall Linus saying the reason he used Fedora (at the time) was that it just worked, and he could spend his time on the kernel instead of tinkering to get Linux working. This is one of the biggest reasons I stopped using (neo)vim.
I had lots of problems with plugins in the ecosystem breaking, becoming incompatible with each other, or falling into unmaintained status. Integrations with external SaaS services are much better, too.
Also information density (and ease of access) as a peer comment has mentioned
Mice are good and the terminal doesn't make the best use of the information density possible on modern displays.
For me, the workflow that Claude Code provides via VSCode plugins or even IntelliJ integration is great. TUI for talking to the agent and then some mild GUI gloss around diffs and such.
I like terminal things because they are easy to use in context wherever I need them - whether that's in my shell locally or over SSH, or in the integrated terminal in whatever IDE I happen to be using.
I use vim if I need to make a quick edit to a file or two.
Idk, terminal just seems to mesh nicely into whatever else I'm doing, and lets me use the right tool for the job. Feels good to me.
My VS Code has a terminal and can remote into any machine to edit code or use a terminal there.
What I don't get is going back to terminal-first approaches, and why so many companies are putting these out (except that it's probably (1) easy to build and (2) an everyone-is-doing-it hype cycle). It was similar when everyone was building ChatGPT functions or whatever before MCP came out. I expect the TUI cycle will fade as quickly as it rose.
I like them because they're easier to launch multiple instances of, and they take fewer resources. Being able to fire agents off into tmux sessions to tackle small-fry issues they can usually one-shot is a powerful tool against the decay of a codebase from high-priority work constantly pushing out housekeeping.
I think it lets developers concentrate their energy on improving the agentic experience, which matters more right now. It's hard to keep up with all the models, for which the developers have to write support code. Once the products mature, I bet they'll go visual again.
I also use IDEs and I think people who use terminal-based editors are lunatics but I prefer terminal-based coding agents (I don't use them a lot to be fair).
It's easier to see the diff file by file and really control what the AI does, IMO.
On another note, VS Code is not an IDE; it's a text editor.
Copilot opens the files and shows me the diff; that is not missing in the IDE.
Perhaps your definition of IDE is more restrictive. I see VS Code as my environment, where I develop with masses of integrations.
Terminal-based editors can work as an IDE too, with diffs and the like. Emacs is like that, and it has Magit, ediff, and who knows what else. And Vim can do the same, of course.
> a real IDE (vs code)
Much better to use Neovim than a clunky, slow editor like VS Code or JetBrains just to edit a text file.
The keyboard is far faster than clicking everywhere with the mouse.
This gets said a lot, but it's not like VS Code doesn't have keyboard support.
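For instance, VS Code keybindings are remappable via its keybindings.json. A minimal sketch (the chord shown mirrors the default build-task binding; adjust to taste):

```json
// keybindings.json (VS Code accepts comments here, as it's JSONC)
// Bind a chord to run the build task — mirrors the default Ctrl+Shift+B.
[
  {
    "key": "ctrl+shift+b",
    "command": "workbench.action.tasks.build"
  }
]
```

So the "keyboard is faster" argument cuts both ways: nearly every GUI action in VS Code has (or can be given) a key chord.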
I use the Vim plugin to keep my keyboard navigation, editor modes, and such... best of both worlds.
I think the sentiment that VS Code is clunky and slow is outdated. I've seen no noticeable impact since moving over from Neovim.
IDEs are slower, use more battery/RAM/CPU, and are ugly, without adding any features.
It's the flexibility: no need for packaged extensions, just compose / pipe / etc.
Looks cool, but do we need another one of these?
based
RIP charm, I guess.
A little disappointed to see Charm hop on AI as well.
No, thanks. I prefer the old way: books, some editor (I like both Emacs and nvi), books with exercises, and maybe some autocomplete setup for function names/procedures, token words, and the like.