Comment by iLemming

3 days ago

> LLMs have inspired a similar change in me

FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL loop with a built-in editor, not the other way around. When you give an LLM a closed loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.
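A minimal sketch of what such a closed loop can look like, using gptel's tool mechanism (the tool name and wiring here are illustrative, and the exact keyword arguments of `gptel-make-tool` may differ between gptel versions):

```elisp
;; Illustrative sketch: expose a live-eval tool to the model via gptel.
;; `gptel-make-tool' is gptel's tool-registration entry point; treat the
;; exact keywords as version-dependent.
(gptel-make-tool
 :name "eval_elisp"
 :description "Evaluate an Emacs Lisp form in the running session and return the printed result."
 :args (list '(:name "form" :type string :description "An Elisp form as a string"))
 :function (lambda (form)
             ;; Read, evaluate, and hand the printed value back to the
             ;; model, closing the guess -> eval -> observe loop.
             (format "%S" (eval (read form) t))))
```

With a tool like this registered, the model can test each hypothesis against the live Emacs instead of guessing.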

The LLM I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of it. To load-test things, I even made it play Tetris in Emacs - and not just run it, but actually play it without losing. It was insane.

Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.

Absolutely. It doesn't have to be an either-or. I use gptel and org-mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.

https://poyo.co/note/20260202T150723/

  • Interesting. Tnx.

    In case anyone else wondered about using gptel to edit thinking (e.g. vis-à-vis Qwen3.6's `preserve thinking`), [1] explains:

    > In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".

    But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thought might be an easy addition?

    A caution, fwiw: any LLMs that respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]

    [1] https://github.com/karthink/gptel/issues/1282

  • Is this your site? I cannot find an RSS feed for it. I'd like to subscribe.

Same here. Emacs has been the stable editor through all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLMs, since LLM work is mostly text-related and Emacs excels at capturing and manipulating text.

So much this. Lisp can do things other languages have a hard time with. I think a resurgence is in order.

  • Can't agree more. Lisp was discovered/invented for the purpose of AI research. Of course, modern neural nets and transformers are a big departure from McCarthy's vision of AI - logical, interpretable, symbolic. However, if the current wave of AI hits a wall - and many serious researchers think it will, or already has at the margins - there's growing interest in neurosymbolic approaches that combine neural nets with symbolic reasoning. That's closer to McCarthy's original vision, and Lisps are genuinely well-suited for it.

    Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.

    Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been demonstrating that 'code is data' is exactly what makes it a better target for LLMs, and Clojure has been shown to be among the most token-efficient programming languages. Some interesting recent Clojure projects in this vein:

    https://github.com/realgenekim/clj-surgeon

    https://clojure.getpando.ai

    https://github.com/yogthos/chiasmus

    • Clojure? Forget it, SBCL would be better for that task. Just look what could be done with Coalton.

    • Well, this is because "normal" programming languages are one step above the AST. So an LLM has to work with program text, which is much easier than regular human text since it's constrained to a well-defined set of keywords and a grammar, but it's still pretty variable. Lisp is just the AST, so it's one level lower. I guess at some point LLMs will stop writing human-readable code, since that's an additional obstacle - they'll work directly with binaries or virtual machine bytecode (as in Java), because it will be easier and eat fewer tokens.
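      The "Lisp is just the AST" point fits in a few lines of Clojure: a form is an ordinary list, so a program can inspect and rewrite other programs with plain list operations, no parser needed. A quick illustrative example:

      ```clojure
      ;; A Clojure form is plain data: a list of symbols and numbers.
      (def form '(+ 1 (* 2 3)))

      (first form)                   ;; => +   (the operator, as a symbol)
      (eval form)                    ;; => 7

      ;; Rewriting the "AST" is just list manipulation:
      (eval (cons '- (rest form)))   ;; => -5  i.e. (- 1 (* 2 3))
      ```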

Can you describe your setup on how you use LLMs within Emacs?

  • Of course.

    I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.

    I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere. I could be typing a message (like this very one) and suddenly need ideas for how to phrase something better, or how to explain something simply, or to fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I use a dedicated gptel buffer. Those get automatically saved.
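    As a hypothetical sketch of that "send a request from anywhere" workflow, assuming gptel's `gptel-request` API (the helper name and prompt are made up; the exact keyword arguments may vary by gptel version):

    ```elisp
    ;; Hypothetical helper: send the active region from any buffer to the
    ;; current gptel backend and show the model's suggestion.
    (defun my/gptel-rephrase-region (beg end)
      "Ask the model to rephrase the text between BEG and END."
      (interactive "r")
      (gptel-request
          (buffer-substring-no-properties beg end)
        :system "Rephrase this text more clearly. Return only the rewritten text."
        :callback (lambda (response _info)
                    (if (stringp response)
                        (message "%s" response)
                      (message "gptel request failed")))))
    ```

    Bound to a key, something like this works in a mail buffer, a code buffer, or a scratch buffer alike - no dedicated chat window required.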

    I don't use gptel as a coding assistant - even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I've tried, and I've tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.

    All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².

    Is this helpful, or do you want some more concrete examples?

    ¹ https://github.com/agzam/death-contraptions

    ² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai

Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.

  • > I don't think a lot of people understand what they're missing

    Very true. There's an enormous tacit knowledge gap. Check this out:

    I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...

    Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.

I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.

I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.

> LLM that I run inside Emacs can fully control the active Emacs instance

> you can easily extract text from anything

This is what gives me the most pause.

  • Care to explain? Why is that? Do you think it's dangerous, or is it something else?

    • It's definitely dangerous.

      Do you have credentials anywhere within reach of that session? Can you open your bank account in a browser ... within reach of that session? Are your contacts available within reach of that session? What about personal notes/emails/goals or other sensitive information? That people think these can't be added together in one very socially/monetarily destructive fell swoop is ... telling.

      Ignoring obvious bad-actor concerns from just giving root to your whole life to an LLM running on someone else's server, LLMs themselves can act in ways that are extremely counterproductive to their organization/host/etc.

      A quote/warning I learned in the late 90s is just as relevant today, "Computers make very fast, very accurate mistakes."