Comment by senko

6 days ago

Switching between two parallel agents (frontend & backend, same project) requires constant context switches.

I'm speccing out the task in detail for one agent, then reviewing code for the previous task on the other agent and testing the implementation, then speccing the next part for that one (or asking for fixes/tweaks), then back to the first agent.

They're way faster at producing code than I am at reviewing it and spelling out in detail what I want, which means I always have the other one ready.

When doing everything myself, there are periods where I need to think hard and periods where it's pretty straightforward and easy (typing out the stuff I envisioned, boilerplate, etc.).

With two agents, I constantly need to be on full alert and totally focused, while also switching contexts every few minutes, which is way more tiring for me.

With just one agent, the pauses in the workflow (while I'm waiting for it to finish) are long enough to get distracted but too short to do anything else (mostly).

Still figuring out the sweet spot for me personally.

I've been meaning to try out some speech-to-text to see if that makes it a bit easier. Part of the difficulty of "spelling out in detail what I want" is the need for precise written language, which is a high cognitive load and makes the context switching harder.

Been wondering if just speaking naturally could be faster than typing it all out. Maybe have an embedded transform/compaction step that strips out all the umms and gets to the point of what you were trying to say. That might be lower cognitive load, which could make it easier.
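
A minimal sketch of that transform/compaction idea, assuming you already have a raw dictation transcript from some speech-to-text tool and are using the OpenAI Python SDK; the model name, prompt, and filler-word regex are just illustrative placeholders, not a specific product's behavior:

```python
import re
from openai import OpenAI  # assumes the OpenAI SDK is installed and OPENAI_API_KEY is set

# Naive first pass: drop obvious filler words before handing the text to a model.
FILLERS = re.compile(r"\b(um+|uh+|erm+)\b[,.]?\s*", re.IGNORECASE)

def compact_transcript(raw_transcript: str, model: str = "gpt-4o-mini") -> str:
    """Strip filler words, then ask a model to compact the rest into a terse task spec."""
    cleaned = FILLERS.sub("", raw_transcript)
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite this rambling spoken description as a concise, precise task spec. "
                    "Keep every concrete requirement; drop repetition and hedging."
                ),
            },
            {"role": "user", "content": cleaned},
        ],
    )
    return resp.choices[0].message.content

# Usage: feed in whatever your dictation tool produced, paste the result into the agent.
# print(compact_transcript(open("dictation.txt").read()))
```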

  • This works really well already. You can fire up something like Wispr Flow and dump what you're saying directly into Claude Code or similar; it will ignore the ums and usually figure out what you mean.

    I use ChatGPT voice mode in their iPhone app for this. I walk the dog for an hour and have a loose conversation with ChatGPT through my AirPods, then at the end I tell it to turn everything we discussed into a spec I can paste into Claude Code.