
Comment by sysmax

6 days ago

There's a pretty good sweet spot in between vibe coding and manual coding.

You still think out all the classes, algorithms, and complexities in your head, but then, instead of writing the code by hand, you use short prompts like "encapsulate X and Y in a nested class + create a dictionary where key is A+B".

This saves a ton of repetitive manual work, while the results are pretty indistinguishable from doing all the legwork yourself.
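For illustration, a prompt like that might turn into something along these lines (a rough sketch only; the class and field names are made up, and the comment's X, Y, A, B are kept as placeholders):

    # Hypothetical result of such a prompt; names are invented for illustration.
    class Catalog:
        class Entry:
            """Nested class encapsulating X and Y."""
            def __init__(self, x, y):
                self.x = x
                self.y = y

        def __init__(self):
            # Dictionary whose key is the combination of A and B.
            self.entries = {}

        def add(self, a, b, x, y):
            self.entries[(a, b)] = Catalog.Entry(x, y)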

I am building a list of examples with exact prompts and token counts here [0]. The list is far from complete, but it gives the overall idea.

[0] https://sysprogs.com/CodeVROOM/documentation/examples/scatte...

I'm still finding the right sweet spot personally. I would love to think only in terms of architecture, features, interfaces, and algorithms, and leave the class and function design fully to the LLM. At this point that almost works, but it requires handholding and some retroactive cleanup. I still do it because it's necessary, but I grow increasingly tired of having to think too close to the code level, as I see it more and more as an implementation detail. (in before the code maximalists get on my case).

  • Most of the AI hiccups come from the sequential nature of generating responses. It gets to a spot where adhering to code structure means token X, and fulfilling some common sense requirement means token Y, so it picks X and the rest of the reply is screwed.

    You can get way better results with incremental refinement: refine the brief prompt into a detailed description, the description into requirements, the requirements into specific steps, and the steps into the modified code (sketched below).

    I am currently experimenting with several GUI options for this workflow. Feel free to reach out to me if you want to try it out.
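    As a rough illustration of that staged pipeline (the `complete()` helper below is just a stand-in for whatever model API you actually call, not a real library function):

        # Staged refinement: each stage's output becomes the next stage's input.
        def complete(prompt: str) -> str:
            raise NotImplementedError("plug in your LLM call here")

        def refine_to_patch(brief: str, code: str) -> str:
            description  = complete("Expand into a detailed description:\n" + brief)
            requirements = complete("Turn into concrete requirements:\n" + description)
            steps        = complete("Break into specific edit steps:\n" + requirements)
            return complete("Apply these steps and return the modified code:\n"
                            + steps + "\n---\n" + code)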

Do you do anything special to get it to follow your preferred style?

  • In my experience it will follow the style of any seeded code quite closely. And you can also pass it a style guide!

  • I use hierarchical trees of guidelines (just markdown sections) that are attached to every prompt. It's somewhat wasteful in terms of tokens, but it works well. If the AI is not creating a new wrapper, it will just ignore the "instructions for creating new wrappers" section.
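    To make the idea concrete, here is a rough sketch of what that might look like (the section names and path are invented for illustration, not taken from any real setup):

        # Each node is a markdown section; the whole tree is flattened and
        # prepended to every prompt. Section names below are made up.
        GUIDELINES = {
            "Coding style": {
                "_text": "Use 4-space indentation. Prefer early returns.",
                "Instructions for creating new wrappers": {
                    "_text": "Put new wrappers in src/wrappers/ and log every call.",
                },
            },
        }

        def flatten(tree: dict, depth: int = 1) -> str:
            parts = []
            for title, node in tree.items():
                if title == "_text":
                    continue
                parts.append("#" * depth + " " + title + "\n" + node.get("_text", ""))
                parts.append(flatten(node, depth + 1))
            return "\n".join(p for p in parts if p)

        prompt_preamble = flatten(GUIDELINES)  # attached to every prompt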