Comment by vanillameow
16 days ago
Sometimes when I use a plugin like this I get reminded just how much of a productivity nerf it is to code without an autocomplete AI. Honestly in my opinion if you write a lot of boilerplate code this is almost more useful than something like Claude Code, because it turbocharges your own train of thought rather than making you review someone else's, which may not align with your vision.
This is a really good plugin. I'm a diehard JetBrains user; I've tried switching to VSCode and its various forks many times because of AI, but muscle memory from years of use is hard to override, and for a lot of languages JetBrains is just much better, especially out of the box. But they dropped the ball so hard on AI it's unbelievable. Claude Code pulled it back a bit, because at least now the cutting-edge tools aren't just VSCode plugins, but I was still missing a solid autocomplete tool. Glad this is here to fill that niche. I will very likely switch my GitHub Copilot subscription to this.
I also really appreciate publishing open weights and allowing a privacy mode for anonymous trial users, even if it's opt-in. Usually these things seem to be reserved for paying tiers these days...
I have always said this and had people on HN reply that they don't get much use out of autocomplete, which puzzled me.
I'm starting to understand that there are two cultures.
Developers who are mostly writing new code get the most benefit from autocomplete and comparatively less from Claude Code. CC is neat but when it attempts to create something from nothing the code is often low quality and needs substantial work. It's kind of like playing a slot machine. Autocomplete, on the other hand, allows a developer to write the code they were going to write, but faster. It's always a productivity improvement.
Developers who are mostly doing maintenance experience the opposite. If your workflow is mostly organized around an issue tracker rather than Figma, CC is incredible, autocomplete less so.
I personally find autocomplete to be detrimental to my workflow so I disagree that it is a universal productivity improvement.
I'm in the "write new stuff with CC and get great code" camp. Of course I'll be told I don't really know what I'm doing, that I just don't know the difference between good- and bad-quality code. Sigh.
The best code is no code.[0]
The main "issue" I have with Claude is that it is not good at noticing when code can be simplified with an abstraction. It will keep piling on lines until the file is 3000 lines long. You have to intervene and suggest abstractions and refactorings. I'm not saying that this is a bad thing. I don't want Claude refactoring my code (GPT-5 does this and it's very annoying). Claude is a junior developer that thinks it's a junior. GPT-5 is a junior developer that thinks it's a senior.
[0]: https://www.folklore.org/Negative_2000_Lines_Of_Code.html
I agree with you. I do have some local LLM support configured in Emacs, as well as integration with Gemini 3 Flash. However, all LLM support is turned off in my setup unless I specifically enable it for a few minutes.
I will definitely try the 1.5B model, but I usually use LLMs by taking the time to edit a large one-shot prompt and feeding it either to one of the new 8B or 30B local models or to Gemini 3 Flash via the app, web interface, or API.
Small purpose-built models are largely under-appreciated. I believe it is too easy to fall into the trap of defaulting to the strongest models and over-relying on them. Shameless plug: it is still incomplete, but I have released an early version of my book 'Winning Big With Small AI', so I admit my opinions are a little biased!
Do you know a good way to give context files to an LLM in Emacs? Currently I have to either use C-x h to select the contents of individual files to give to the AI, or copy the files themselves into the AI's chat interface. I would much prefer selecting all files at once and getting their contents copied to the clipboard.
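Not an Emacs-native answer, but outside the editor the same result can be had with a small script. `bundle_files` below is a hypothetical helper, not part of any existing package; piping its output to `pbcopy`, `xclip`, or `wl-copy` (platform-dependent) would put everything on the clipboard in one go:

```python
# Hypothetical helper: concatenate several files into one prompt-ready
# string, each section prefixed with a header naming the file. The
# clipboard step is left to the caller, since it varies by platform.
from pathlib import Path


def bundle_files(paths):
    """Join the contents of `paths` into one labeled block."""
    parts = []
    for p in map(Path, paths):
        parts.append(f"--- {p} ---\n{p.read_text()}")
    return "\n\n".join(parts)


# Usage sketch: python bundle.py src/*.py | pbcopy
```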
Yep. I'm coming to resent Claude Code and tools like it for taking me out of direct contact with the code.
I think we're still in the early days of these systems. The models could be capable of a lot more than this "chat log" methodology.
Agree about JetBrains dropping the ball. Saddens me because I've also been a diehard user of their products since 2004.
Glad to hear I'm not alone; the latest releases of JetBrains have been so bad I finally cancelled my subscription. VSCode has been a nice surprise: "it's giving old Emacs," as the kids would say.
I am curious how both of you think JetBrains is dropping the ball so badly that you are no longer buying the tool.
Are you still using it but no longer getting updates?
Hopefully not too offtopic: why so much boilerplate?
I see most would-be-boilerplate code refactored so the redundant bit becomes a small utility or library. But most of what I write is for research/analysis pipelines, so I'm likely missing an important insight. Like more verbose configuration over terse convention?
For code structure, snippet templating[1] ("iff[tab]" => "if(...){...}") handles the bare conditional/loop completions in a more predictable way, offline and without an LLM eating into RAM.
[1] https://github.com/joaotavora/yasnippet; https://github.com/SirVer/ultisnips; https://code.visualstudio.com/docs/editing/userdefinedsnippe...
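To make the "iff[tab]" expansion above concrete, a yasnippet definition for it might look like this (the `name` and field labels are my own choices, not from any stock snippet collection):

```
# -*- mode: snippet -*-
# name: if-statement
# key: iff
# --
if (${1:condition}) {
    $0
}
```

Pressing TAB after `iff` expands the body, leaves point on the `condition` field, and jumps to `$0` once the condition is filled in. Ultisnips and the VSCode user-snippet JSON format express essentially the same idea with slightly different syntax.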
Abstracting away redundancy can make it harder to understand exactly what the code is doing, and can introduce tech debt when you need slightly different behavior from code that has been abstracted away. Also, if the boilerplate is configuration, it's good to see exactly what the configuration is when trying to grok how some code works.
You bring up a good point with snippets, though, and I wonder if that would be good information to feed into the LLM for autocomplete. A snippet is helpful if you want to write one condition at a time, but say you have a dozen if statements to write with that snippet. After you write one, the LLM could generate suggestions for the other eleven using the same snippet, while also taking into consideration the different types of the values and what you might be checking against.
As for RAM and processing, you're not wrong there, but with specialized models, specialized hardware, and improvements in model design, the number of people working in environments so restricted that resource use is a concern will decrease over time, and the utility of these tools will increase. Sure, a lower-tech solution works just fine, and it will continue to work fine, but at some point the higher-tech solution will offer similar levels of friction and resource use for much better utility.
Junie is irredeemable, but if it's autocomplete that you are unhappy about, IntelliJ now has both local and cloud autocomplete.
It is depressing that our collective solution to the problem of excess boilerplate keeps moving towards auto-generation of it.