
Comment by jacob019

20 hours ago

Seems everyone is working on the same things these days. I built a persistent Python REPL subprocess as an MCP tool for CC; it worked so insanely well that I decided to go all the way. I already had an agentic framework built around tool calling (agentlib), so I adapted it for this new paradigm, and code agent was born.

The agent "boots up" inside the REPL. Here's the beginning of the system prompt:

  >>> help(assistant)

  You are an interactive coding assistant operating within a Python REPL.
  Your responses ARE Python code—no markdown blocks, no prose preamble.
  The code you write is executed directly.

  >>> how_this_works()

  1. You write Python code as your response
  2. The code executes in a persistent REPL environment
  3. Output is shown back to you IN YOUR NEXT TURN
  4. Call `respond(text)` ...

You get the idea. No need for custom file editing tools--Python has all that built in and Claude knows it perfectly. No JSON marshaling or schema overhead. Tools are just Python functions injected into the REPL, zero context bloat.
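The tools-as-functions idea can be sketched roughly like this (a minimal sketch; the class and function names are mine, not agentlib's):

```python
import contextlib
import io

class ReplSession:
    """A persistent namespace shared across turns; tools are plain functions."""

    def __init__(self):
        self.namespace = {}

    def inject_tool(self, func):
        # A tool is just a Python function dropped into the REPL namespace:
        # no JSON schema, no marshaling layer, no extra context.
        self.namespace[func.__name__] = func

    def run(self, code):
        # Execute one model "turn" and capture stdout to feed back next turn.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, self.namespace)
        return buf.getvalue()

def respond(text):
    print(f"[assistant] {text}")

session = ReplSession()
session.inject_tool(respond)
session.run("x = 21")                            # state survives between turns
out = session.run("respond(f'answer: {x * 2}')") # -> "[assistant] answer: 42\n"
```

Because the namespace persists, anything the model defines in one turn (variables, helper functions, even new tools) is still there in the next.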

I also built a browser control plugin that puts Claude directly into the heart of a live browser session. It can inject element pickers so I can click around and show it what I'm talking about. It can render prototype code before committing to disk, killing the annoying build-fix loop. I can even SSH in from my phone and use TTS instead of typing, surprisingly great for frontend design work. Knocked out a website for my father-in-law's law firm (gresksingleton.com) in a few hours that would've taken 10X that a couple years ago, and it was super fun.

The big win: complexity. CC has been a disaster on my bookkeeping system; there's a threshold past which Claude loses the forest for the trees and makes the same mistakes over and over. Code agent pushes that bar out significantly. Claude can build new tools on the fly when it needs them. Gemini works great too (larger context).

Have fun out there! /end-rant

> there's a threshold past which Claude loses the forest for the trees and makes the same mistakes over and over.

Try using something like Beads with Claude Code. Also don't forget to have a .claude/instructions.md file; you can literally ask Claude to make it for you. This is the file Claude reads every time you make a new prompt, so if Claude starts acting "off" you tell it to reread it. With Beads, I basically tell Claude to always use it when I pitch anything, to always test that things build after it thinks it's done, and to ask me to confirm before closing a task (they're called beads, but Claude figures out what I mean).
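A sketch of what such a file might contain, matching the rules above (entirely hypothetical; ask Claude to generate your own):

```markdown
# .claude/instructions.md (hypothetical example)

- When I pitch any idea, create a Beads ticket for it before starting work.
- After you think a task is done, verify the project still builds.
- Ask me to confirm before closing any task.
```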

With Beads, the key thing I do once all that is set up is give it my ideas, and it makes a simple ticket or tickets. Then I braindump and/or ask it to do market research on each item with the parameters I want considered, and to update the task accordingly. I review them all. Then I can say "work on these in parallel" and it spins up as many agents as there are tasks and goes to work. I don't always run the work in parallel if it's a lot of tasks, because Zed has frozen up on me; I've read that Claude Code itself is fine, it's just the protocol Zed uses that gets hosed up.

I find that with Beads, because I refine the tickets with Claude, it does way better at tasks. It's like hand-crafting the "perfect" prompt, but the AI model is writing it, so it knows exactly what you mean because it wrote it in its own verbiage.

  • Ohh, looks interesting, thanks for the tip!

    Git as a db is clever, and the SQLite cache is nice. I'd been sketching SQLite-based memory features myself. So much of the current ecosystem is suboptimal just because it's new. The models are trained around immutable conversation ledgers with user/tool/assistant blocks, but there are compelling reasons to manipulate both sides at runtime and add synthetic exchanges. Priming with a synthetic failure and recovery is often more effective than a larger, more explicit system message. Same with memory; we just haven't figured out what works best.

    For code agents specifically, I found myself wanting old style completion models without the structured turn training--doesn't exist for frontier models. Talking to Claude about its understanding of the input token stream was fascinating; it compared it to my visual cortex and said it'd be unreasonable to ask me to comment on raw optic nerve data.

    "Tell it to reread it again": exactly the problem. My bookkeeping/decision engine has so much documentation that I'm running out of context, and every bit is necessary due to interconnected dependencies. Telling it to reread content already in the window feels wrong; that's when I refine the docs or adjust the framework. I've also found myself adding way more docstrings and inline docs than I'd ever write for myself. I prefer minimal, self-documenting code, so it's a learning process.

    There's definitely an art to prompts, and it varies wildly by model, completely different edge cases across the leading ones. Thanks again for the tip; I suspect we'll see a lot of interesting memory developments this year.
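The synthetic failure-and-recovery priming mentioned above can be sketched like this (the role/content message shape is the common chat convention; the example content and helper name are illustrative assumptions, not anyone's actual code):

```python
# Prepend a fabricated mistake-and-correction exchange to the transcript,
# so the model sees the recovery pattern instead of a longer system message.

def primed_history(system_msg):
    return [
        {"role": "system", "content": system_msg},
        # Synthetic turn: the "assistant" makes the mistake we want to preempt...
        {"role": "assistant", "content": "open('notes.txt').write('hi')"},
        # ...the next turn shows the failure it produced...
        {"role": "user", "content": "io.UnsupportedOperation: not writable"},
        # ...and the corrected behavior the model should imitate going forward.
        {"role": "assistant", "content": "open('notes.txt', 'w').write('hi')"},
    ]

history = primed_history("You are a coding assistant in a Python REPL.")
```

Real user turns are then appended after the synthetic ones; the model treats the fabricated exchange as its own prior behavior.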

This sounds really cool! I love the idea behind it: the agent having persistent access to a REPL session. I like REPL-based workflows in general. Is any of this code public, by any chance?