Comment by farfatched

3 days ago

A big loss for the Emacs community! emacs-aio is great!

I see the author is spring cleaning:

> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.

https://github.com/skeeto/dotfiles/commit/df275005769b654618...

> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.

https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...

LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible about adopting new tech, which involves freeing myself of previous choices.

> LLMs have inspired a similar change in me

FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL with a built-in editor, not the other way around. When you give an LLM a closed-loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.

The LLM I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs. And not just run it, but actually play it without losing. It was insane.
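
To make that concrete, here is a hedged sketch of the kind of forms such a tool could evaluate against a live instance. These are standard Emacs Lisp functions; the specific examples are illustrative, not the commenter's actual setup:

```elisp
;; Forms an LLM tool can evaluate in the running Emacs and read back:
(buffer-list)                                      ; inspect open buffers
(with-current-buffer "*scratch*" (buffer-string))  ; read a buffer's text
(set-face-attribute 'default nil :height 140)      ; reconfigure the UI live
```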

Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.

  • Absolutely. It doesn't have to be an either-or. I use gptel and org mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.

    https://poyo.co/note/20260202T150723/

    • Interesting. Tnx.

      In case anyone else wondered about using gptel to edit thinking (e.g. via Qwen3.6's `preserve thinking`), [1] explains:

      > In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".

      But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thought might be an easy addition?

      A caution, FWIW: any LLMs that respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]

      [1] https://github.com/karthink/gptel/issues/1282

    • Is this your site? I cannot find an RSS feed for it. I'd like to subscribe.

  • Same here. Emacs has been the stable editor through all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLMs, as LLM work is mostly text-related and Emacs is great at capturing and dealing with text.

  • So much this. Lisp can do things other languages have a hard time with. I think a resurgence is in order.

    • Can't agree more. Lisp was discovered/invented for the purpose of AI research. Of course, modern neural nets and transformers are a big departure from McCarthy's vision of AI - logical, interpretable, symbolic. However, if the current wave of AI hits a wall - and many serious researchers think it will, or already has at the margins - there's growing interest in neurosymbolic approaches that combine neural nets with symbolic reasoning. That's closer to McCarthy's original vision, and Lisps are genuinely well-suited for it.

      Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.

      Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been proving that 'code is data' is exactly what makes it a better target for LLMs. Clojure is the most token-efficient PL - it's been proven. Some recent interesting clj projects in this vein:

      https://github.com/realgenekim/clj-surgeon

      https://clojure.getpando.ai

      https://github.com/yogthos/chiasmus

  • Can you describe your setup on how you use LLMs within Emacs?

    • Of course.

      I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.

      I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.

      I don't use gptel as a coding assistant; even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.

      All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².

      Is this helpful, or do you want some more concrete examples?

      ¹ https://github.com/agzam/death-contraptions

      ² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai

  • Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.

    • > I don't think a lot of people understand what they're missing

      Very true. There's an enormous tacit knowledge gap. Check this out:

      I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. An MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It can figure out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...

      Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.

  • I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.

    I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.
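
    For anyone curious, the moving parts here are small. A hedged sketch (the function names are real Emacs/Magit/ERT commands; the file path is made up for illustration):

    ```elisp
    ;; In init.el: let external processes talk to this session.
    (server-start)

    ;; The agent can then drive the session from outside, e.g.:
    ;;   emacsclient --eval '(find-file "lisp/pi-tools.el")'
    ;;   emacsclient --eval '(magit-ediff-show-working-tree "lisp/pi-tools.el")'
    ;;   emacsclient --eval '(ert-run-tests-batch)'
    ```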

I wonder what friction/maintenance he found with Tridactyl

For me the friction always comes when I try to use the internet without it

  • We're talking about https://addons.mozilla.org/en-US/firefox/addon/tridactyl-vim...?

    One example: it disables the default Ctrl-F search function, but its own search function is subpar (no match counts/hlsearch, e.g.) and often clashes with a website's built-in search (on GitHub, e.g.).

    It doesn't work on the default newtab either, and changing the default newtab somehow makes opening a new tab slower (that's FF's fault, I guess)…

    • You can type /phrase and then press ctrl-F for the full search bar. A more annoying problem is that some websites capture / presses, making it harder to initiate a page search. Then you have to shift-esc ctrl-f to search.

  • Cool to see you in the wild. For me it does work out of the box; however, some sites break or have too complex a navigation, especially with iframes, and I have to swap to a mouse, which is a bummer. I understand that's an inherent limitation of the tech, since the web today isn't built for that.

    Solid extension, big fan.

  • I'm not the author, but I recently gave up on Firefox, sadly.

    Since I needed to keep a Chromium around anyway, and I'm already forced to use one for work, it became simpler to just use Chromium exclusively.

    In the process I dropped some extensions.

    It's been great.

    • To be honest I find the use of a separate browser at work a good way of forcing separation - all "work stuff" is done in one browser, and all "personal stuff" is done in a different one.

      This time around I'm using Chromium for personal stuff, and Firefox for work-stuff. I do more work-related browsing, so having the vertical tabs in firefox meant that was the better browser to use for official stuff.

      (In my previous job I used safari for work, and firefox for personal.)

    • I used Firefox for 20 years, loved it, defended it. But they just kept removing features that I was used to, and I ran into some bugs with popular websites and decided to hang it up. Currently on Brave and fully convinced it's the new Firefox.

I am running Ubuntu as my desktop operating system. I would never do this without an LLM to do the work of keeping it functional for me. Today, Rise of Nations wouldn't launch. Never had that problem before. Seems the driver for 32-bit games and my Nvidia GPU weren't getting along after an update. Codex was called in and solved the problem for me in about 5 minutes. I just copied and pasted the Steam log and let it tell me what to do. Tadah.

I'm actually excited about the potential for a future where local agents help improve the operating system experience as I go by making changes based on my use case. All local, of course. I do not want to trust a cloud provider with my use cases/behavior on my computer so they can sell me more ads...

Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.

  • If you frequently have to use other computers, a heavily customized setup has much more friction: you either have to set each machine up the way you want, or remember how to do things without all the customization (if you can't customize, or it isn't worth the time).

    When I graduated college I used Dvorak and Emacs on Linux. Six months of extensively having to use shared Windows lab computers beat me down into surrendering on all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties, most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.

  • A heavily-customised setup is very comfortable.

    It's so comfortable that it acts as an impediment to change, since some types of change are uncomfortable.

    This can feel like friction to me.

    When I remove customisation, I am more "open to experience", and often find preferable tooling.

  • Arguably NixOS is the most config-heavy platform, but it solves the pain point of having to reconfigure on different systems. Especially in the LLM era, where I can configure Emacs and my OS declaratively.

    • How do you nixify your Emacs configuration? I've looked into it but at the time the advice was to specify dependencies both in Nix and in .emacs.d, which seemed redundant to me. Is there something like callCabal2Nix for Emacs?

      Edit: Or do you mean "declaratively" in the sense of using something like straight.el?

  • > heavily-customized and terminal-heavy setup

    this exactly. most people can’t set it up that well.

[flagged]

  • "more flexible with adopting new tech" and "freeing myself of previous choices" are completely unrelated to what you just wrote.

  • Especially ridiculous because old-school bash CLI scripts are the only usable protocol for interacting with LLM agents.

  • Our lives are much more than our computing environments. By surrendering a bit of control of our computing environments we free up our brains to devote to other things in life: loved ones, pets, gardening, home maintenance, other hobbies and sports...

    Millions of happy Apple users can't be wrong on this.

    • Maybe, but for some of us, the peace of mind comes from stability and minimal friction with our tools.

      Whenever I touch my config, it's because I got frustrated with one operation and tried to see if it could be done faster. If you use your computer like a toaster, then you wouldn't care that much about power usage. But for me it's a creative lab, and I don't want a generic cubicle.