Comment by ako

5 hours ago

I think I have enough control, probably more than when working with developers. Here's something I recently had Claude Code build: https://github.com/ako/backing-tracks

If you check the commit log, you'll see small increments. The architecture document is what I have it generate to validate the created architecture: https://github.com/ako/backing-tracks/blob/main/docs/ARCHITE...

Beyond that, most changes start with the AI generating a proposal document, which I review and improve before having it built. I think this was the starting proposal: https://github.com/ako/backing-tracks/blob/main/docs/DSL_PRO...

This started as a conversation in Claude Desktop, which it then summarized into this proposal. I then copied that into Claude Code to have it implemented.

> I think I have enough control.

This is probably just a disagreement about the meaning of the term "control", so I suppose we can agree to disagree on that one.

The rest of the reply doesn't really address any of the points I raised.

That it's possible to successfully use the tool to achieve your goals wasn't in dispute.

I'll try to narrow it down:

---

> You are not a victim at the mercy of your LLM.

Yes, you absolutely are; that's how they work.

As I said, you can suggest guidelines and directions, but there's no guarantee they'll be adhered to.

To be clear, this applies to people as well.

---

Directing an LLM (or LLM based orchestration system) is not the same as directing a team of people.

The "interface" is similar in that you provide instructions and guidelines and receive an attempt at the desired outcome.

However, the underlying mechanisms of how they work are so different that the analogy you were trying to use doesn't make sense.

---

Again, LLMs can be useful tools, but presenting them as something they aren't only muddies the waters of understanding how best to use them.

---

As an aside, IMO, the sketchy-salesman approach of over-promising on features and obscuring the limitations will do great harm to the adoption of LLMs in the medium to long term.

The misrepresentation of terminology also contributes to this.

The term "AI" is intentionally used to attribute a level of reasoning and problem-solving capability beyond what these systems actually possess.

  • Looks like we just have different expectations: I don't want to micromanage my coding agents any more than I micromanage the developers I work with as a product manager. If the output does what it is supposed to do, and the software is maintainable and extendable by following certain best practices, I'm happy. And I expect that goes for most business people.

    And in practice I have more control with a coding agent than with developers, because I can iterate over ideas quickly: "build this idea", "no, change this", "remove this and replace it with this". Within an hour you can iterate an idea into something that works well. With developers this would have taken days if not longer, and they would have complained that I needed to prepare my requirements better.