Comment by jacquesm

8 days ago

For those of you for whom it is working: show your code, please.

I'll bite. Here's a 99.9% vibe-coded raw Git repository reader suitable for self-hosted or shared host environments:

https://repo.autonoma.ca/treetrek

There's still some work to do on the rendering side of model objects. Developing the syntax highlighting rules for 40 languages and file formats in about 10 minutes was amazing to see.

https://repo.autonoma.ca/repo/treetrek/tree/HEAD/render/rule...

  • Cool, thank you.

    Edit: great example. What is your long-term maintenance strategy? Do you keep the original prompts around so you can refine them later, or do you dig into the source?

    Would love to see more of your workflow.

Here's one success I had -

https://github.com/sroerick/pakkun

It's git for ETL. I haven't looked at the code, but I've been using it pretty effectively for the last week or two. I wouldn't feel comfortable recommending it to anybody else, but it was basically one-shotted. I've been dogfooding it on a number of projects, had the LLM iterate on it a bit, and I'm generally very happy with the ergonomics.

  • That's a nice example, can you explain your 'one shot' setup in some more detail?

    • I don't have the prompt, but I used Codex. I probably wrote a medium-sized paragraph explaining the architecture. It scaffolded out the app, and I think I prompted it twice more with some very small bugfixes. That got me to an MVP, which I used to build LaTeX pipelines. Since then, I've added a few features as I've dogfooded it.

      It's a bit challenging and frustrating to get LLMs to build out a framework/library and the app that uses that framework at the same time. If it hits a bug in the framework, sometimes it will rewrite the app to match the bug rather than fixing the bug. It's kind of a context balancing act, and you have to have a pretty good idea of how you're looking to improve things as you dogfood. It can be done, but it takes some juggling.

      I think LLMs are good at golang, and also good at that "lightweight utility function" class of software. If you keep things skeletal, I think you can avoid a lot of the slop feeling when you get stuck in a "MOVE THE BUTTON LEFT" loop.

      I also think that dogfooding is another big key. I coded up a calculator app for a dentist office which 2-3 people use about 25 times a day. Not a lot of moving parts, it's literally just a calculator. It could basically be an Excel spreadsheet, except an app is a lot better UX. It wouldn't have been software I'd have written myself, really, but in about 3 total hours of vibecoding, I've had two revisions.

      If you can get something to a minimal functional state without a lot of effort, and you can keep your dev/release loop extremely tight, and you use it every day, then over time you can iterate into something that's useful and good.

      Overall, I'm definitely faster with LLMs, though I don't know if I'm that much faster. I was probably most fluent building web apps in Django, and I was pretty dang fast with that. With LLMs, the questions are more like "How do you build tests to prevent function drift?" and "How can I scaffold a feedback loop so that the LLM can debug itself?"
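
      A minimal sketch of "tests to prevent function drift" in Go: pin a function's observable behavior against a table of expected outputs, so an LLM edit that silently changes semantics fails the check immediately. Everything here (`slugify`, `checkDrift`, the pinned cases) is a hypothetical example, not code from pakkun:

      ```go
      package main

      import (
      	"fmt"
      	"strings"
      )

      // slugify is a stand-in for any small function whose behavior
      // you want an LLM to preserve while it iterates on the codebase.
      func slugify(s string) string {
      	s = strings.ToLower(strings.TrimSpace(s))
      	return strings.ReplaceAll(s, " ", "-")
      }

      // checkDrift compares the function against pinned expectations
      // and returns a description of every mismatch.
      func checkDrift() []string {
      	pinned := map[string]string{
      		"Hello World": "hello-world",
      		"  ETL Run ":  "etl-run",
      	}
      	var drifted []string
      	for in, want := range pinned {
      		if got := slugify(in); got != want {
      			drifted = append(drifted,
      				fmt.Sprintf("%q: got %q, want %q", in, got, want))
      		}
      	}
      	return drifted
      }

      func main() {
      	if d := checkDrift(); len(d) > 0 {
      		fmt.Println("DRIFT:", d)
      	} else {
      		fmt.Println("ok")
      	}
      }
      ```

      In a real project the pinned table would live in a `_test.go` file and run in CI, which is what closes the feedback loop: the LLM can run the tests itself and see exactly which behavior it broke.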


Why is this the attitude when it comes to AI? Can you imagine someone saying "please provide your code" when they claim that Rust sped up work on their repo or that TypeScript reduced errors in production?

  • Yes, I can absolutely imagine that.

    • Eh, sorry, I may have been too quick to judge, but in the past, when I have shared examples of AI-generated code with skeptics, the conversation rapidly devolved into personal attacks on my ability as an engineer, etc.
