Comment by razoorka

7 days ago

I ran a several-month experiment using Claude as the sole engineer on a real SaaS side project (Node.js/React, production quality, full feature ownership). My observations:

The quality of Claude’s output strongly correlates with how explicit and detailed the specs are. Markdown checklists, acceptance criteria, and clear task structure led to far fewer corrections and surprises.
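To make "explicit specs" concrete, a typical task spec looked roughly like this. A minimal sketch; the feature, endpoint, and function names are invented for illustration:

```markdown
## Task: CSV export for the invoices list

- [ ] Add `GET /api/invoices/export?format=csv`
- [ ] Reuse the existing `listInvoices()` query; no schema changes
- [ ] Stream the response instead of buffering the whole file in memory

Acceptance criteria:
- [ ] Responds with `Content-Type: text/csv` and a filename in `Content-Disposition`
- [ ] Honors the same filters as the list view (date range, status)
- [ ] Unit test covers the empty result set

Out of scope: XLSX export, scheduled exports.
```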

Most of the mistakes weren’t “creative hallucinations”; they came down to missed context or ambiguity in my own instructions.

The whole process felt less like delegation and more like tight collaboration. There’s less “flow state” and more context-switching into review and testing, but the review fatigue stays manageable if you fold each lesson back into your instructions.

There’s no real learning from prior sessions, but persistent design docs and corrected example snippets in the prompts made a big difference.
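By “corrected example snippets” I mean short reference blocks pasted back into the prompt after a recurring mistake. A hypothetical one from the Express side (every name here is invented; the point is the convention, not the feature):

```js
// Pasted into prompts after Claude repeatedly let async route
// errors escape as unhandled rejections. Illustrative names only.
const express = require('express');
const app = express();

// Stub standing in for a real data-access helper.
const listInvoices = async (query) => [];

// Convention: wrap async handlers so rejections reach the
// error-handling middleware instead of crashing the process.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

app.get('/api/invoices', asyncHandler(async (req, res) => {
  res.json(await listInvoices(req.query));
}));
```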

Overall, a rigorous process smooths out a lot of the “vibe” factor, not unlike mentoring a junior dev in a new domain.

For me, the speedup was very visible on well-scoped backend tasks and trivial CRUD/UI, and less so on broader, ambiguous work (designing APIs, coordinating many moving parts). The biggest upside: a clear process made both me and the tool better; the biggest “cost”: I spent a lot more time up front thinking through specs.

Not sure it fully scales (and I’m skeptical of agent “fleets”), but for a solo dev with patience and a bit of PM discipline, it’s a net positive.

> I spent a lot more time up front thinking through specs.

Presumably you didn’t work like this before, but now you do? Have you tried simply doing this when coding manually? It’s possible you’re getting a speedup just from embracing good practices.