Comment by leptons

4 hours ago

That could explain the glut of AI hype on HN. Some people think it's magic, when it's just creating a lot of barely-functional slop. If they actually looked at the code it creates, they probably wouldn't be shouting about it from the rooftops. It almost seems like AI has its own "reality distortion field".

I often give the AI a task to produce code for a specific thing, then write my own solution to the same problem in parallel. My solution is always about a quarter of the code, and likely far easier for another real human to read through.

I also either match or beat the AI on speed; Claude seems to take forever sometimes. With all the coddling and revisions the AI requires, I'm usually done before it is. It takes a non-negligible amount of time to think through and write down instructions so the AI can make a try at not fucking it up - and that's time I could have spent coding a straightforward solution I already knew how to produce, without needing to write step-by-step instructions.

In my experience, it's definitely faster to do it manually if it's something you know well. What LLMs enable is skipping the research and learning by producing usable code immediately.

  • There is a long way between "usable code" and "the code I actually want", and each change I ask for piles on more slop. I don't get that slop when I just spend the same amount of time writing it out myself.

    Most of what I find AI useful for is analyzing and summarizing large volumes of data - looking through log files for a problem, or compiling reports from tons of JSON data. But even for those use cases, a simple CTRL-F is way, way faster.
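    The kind of log-and-JSON sifting described above rarely needs an LLM at all - a plain substring scan and a filter over parsed JSON cover most of it. A minimal sketch in Python (the log lines, report structure, and field names are hypothetical):

    ```python
    import json

    # Hypothetical log: a plain substring scan finds the problem lines,
    # with line numbers - the scripted equivalent of CTRL-F.
    log_lines = ["INFO start", "ERROR disk full", "INFO done"]
    errors = [(n, line) for n, line in enumerate(log_lines, 1) if "ERROR" in line]
    print(errors)  # [(2, 'ERROR disk full')]

    # Hypothetical JSON report: summarize failures with a simple filter.
    report = json.loads('[{"id": 1, "status": "ok"}, {"id": 2, "status": "failed"}]')
    failed_ids = [item["id"] for item in report if item["status"] == "failed"]
    print(failed_ids)  # [2]
    ```

    For anything this mechanical, a few lines of script are deterministic and instant, which is the point the comment makes about CTRL-F.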