
Comment by voxelghost

8 days ago

I don't have an LLM/AI write or generate any code or documents for me. Partly because the quality is not good enough, partly because I worry about copyright/licensing/academic rigor, and partly because I worry about losing my own edge.

But I do use LLM/AI: as a rubber duck that talks back, as a Google on steroids - but one whose work needs double-checking - and as a domain-discovery tool when quickly trying to get a grasp of a new area.

It's just another tool in the toolbox for me. But the toolbox is like a box of chocolates - you never know what you're going to get.

In the new world that's emerging, you are losing your edge by not learning how to master and leverage AI agents. Quality not good enough? Instruct them in how you want them to code, and make sure a sufficient quantity of the codebase is loaded into their context so they can see examples of what you consider good enough.

  • >Instruct them in how you want them to code

    They don't always listen.

    Writing SQL, I'll give ChatGPT the schema for 5 different tables. It habitually generates solutions with columns that don't exist. So, naturally, I append, "By the way, TableA has no column FieldB." Then it just imagines a different one. Or, I'll say, "Do not generate a solution with any table-col pair not provided above." It doesn't listen to that at all.

    • This is something that people working on extremely simple apps don’t understand because for their purposes it looks like magic.

      If you know what you're doing, and you're trying to achieve something other than the same tutorials that have been pasted all over the internet, the non-deterministic pattern machine is going to generate plausible BS.

      They’ll tell you any number of things that you’re supposedly doing wrong without understanding what the machine is actually doing under the hood.
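One pragmatic response to the schema-hallucination problem described above is to check the model's output before running it: scan the generated SQL for table.column references and reject anything not in the schema you provided. Below is a minimal stdlib-only sketch of that idea; the schema, table names, and query string are all hypothetical, and the regex only catches qualified `table.column` references (a real checker would use a proper SQL parser):

```python
import re

# Hypothetical schema: table name -> set of valid column names.
SCHEMA = {
    "orders": {"id", "customer_id", "total"},
    "customers": {"id", "name", "email"},
}

def unknown_columns(sql: str, schema: dict) -> list:
    """Return table.column pairs in `sql` that don't exist in `schema`.

    Only qualified references (table.column) are checked; unqualified
    column names and aliases would need an actual SQL parser.
    """
    bad = []
    for table, column in re.findall(r"\b(\w+)\.(\w+)\b", sql):
        if table in schema and column not in schema[table]:
            bad.append(f"{table}.{column}")
    return bad

# An LLM-generated query that invents a nonexistent column:
query = "SELECT orders.id, orders.discount FROM orders"
print(unknown_columns(query, SCHEMA))  # ['orders.discount']
```

This doesn't make the model listen, but it turns "the column doesn't exist" from a runtime surprise into an automatic rejection you can feed back into the loop.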