Comment by threethirtytwo
13 days ago
I think it’s usage patterns. It is you in a sense.
You can’t deny that someone like Ryan Dahl, the creator of Node.js, declaring that he no longer writes code is objectively contrary to your own experience. Something is different.
I think you and other deniers try one prompt, see the issues, and stop.
Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want. The first output is almost always not what you want. It is the feedback loop between you and the AI that creates something better than either party could produce alone.
> Programming with AI is like tutoring a child. You teach the child, tell it where it made mistakes and you keep iterating and monitoring the child until it makes what you want.
Who are you people who spend so much time writing code that this is a significant productivity boost?
I'm imagining doing this with an actual child and how long it would take for me to get a real return on investment at my job. Never mind that the limited amount of time I get to spend writing code is probably the highlight of my job, and I'd effectively be replacing it with more code reviews.
Here's an example:
I recently inherited a web project over a decade old, full of EOL'd libraries and OS packages, that desperately needed to be modernized.
Within 3 hours I had a working test suite with 80% code coverage on core business functionality (~300 tests). Now, maybe the tests aren't the best designs given there is no way I could review that many tests in 3 hours, but I know empirically that they cover a majority of the code of the core logic. We can now incrementally upgrade the project and have at least some kind of basic check along the way.
There's no way I could have pieced together as large a working test suite using tech of that era in even double that time.
> maybe the tests aren't the best designs given there is no way I could review that many tests in 3 hours,
If you haven't reviewed and signed off then you have to assume that the stuff is garbage.
This is the crux of using AI to create anything and it has been a core rule of development for many years that you don't use wizards unless you understand what they are doing.
2 replies →
I write firmware for a heavily regulated medical device (where mistakes mean life and death), and I try to have AI write unit tests for me all the time. I would say I spend about 3 days correcting and polishing what the AI gives me in 30 minutes. The first pass the AI gives me likely saves a day of work, but you would have to be crazy to trust it blindly. I guarantee it is not giving you what you think it is or what you need. Writing the tests is when I usually find and fix issues in the code; if AI writes tests that all pass without any updates to the code, it's likely falsely telling you the code is perfect when it isn't.
1 reply →
You know that only means a majority of the core-logic code executes, right? Are you sure the tests actually check that those bits of logic are doing the right thing? I've had Claude et al. write me plenty of tests that exercise things and then explicitly swallow errors and pass.
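For illustration, here is a minimal sketch of that failure mode (the function, its bug, and the test are hypothetical): the test executes the business logic, so it counts toward line coverage, yet it asserts nothing meaningful and swallows any exception, so it can never fail.

```python
def apply_discount(price: float, percent: float) -> float:
    # Buggy business logic: subtracts the raw number instead of a
    # percentage of the price (should be price * (1 - percent / 100)).
    return price - percent


def test_apply_discount() -> None:
    try:
        result = apply_discount(200.0, 10.0)
        assert result is not None  # vacuous: true for any return value
    except Exception:
        pass  # errors swallowed: even a crash would not fail the test


test_apply_discount()  # "passes", although a 10% discount on 200 should be 180
```

A coverage tool would report `apply_discount` as fully covered, which is exactly why coverage percentage alone says nothing about whether the assertions are worth anything.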
1 reply →
... Yeah, those tests are probably garbage. The models probably covered the 80% that consists of boilerplate and mocked out the important 20% that was critical business logic. That's how it was in my experience.
For God's sake, that's complete slop.
1 reply →
it's not just writing code.
And maybe child is too simplistic of an analogy. It's more like working with a savant.
The type of thing you can tell AI to do is like this: you tell it to code a website... it does it, but you don't like the pattern.
So you say "use functional programming", "use camel-case", "don't use this pattern", "don't use that one", and it does it. You can put those instructions in the agent file and they stay burned in from then on.
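As a hypothetical sketch, such an agent instructions file might look like this (the exact filename depends on the tool, e.g. CLAUDE.md for Claude Code or AGENTS.md for others; the rules themselves are made up):

```markdown
# Project conventions for the coding agent

- Use a functional programming style; avoid mutable shared state.
- Use camel-case for variable and function names.
- Don't add new dependencies without asking first.
- Don't touch anything under vendor/.
```

The agent reads this file at the start of each session, so corrections you'd otherwise repeat in every prompt only need to be stated once.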
A better way to put it is with this example: I put my symptoms into ChatGPT and it gives me generic info with a massive "not medical advice" boilerplate and refuses to give specific recommendations. My wife (an NP) puts in anonymized medical questions and gets highly specific, med-terminology-heavy guidance.
That's all to say: the learning curve with LLMs is learning how to phrase things a specific way to reliably get an outcome.
These people are just the same charlatans and scammers you saw in the web3 sphere. Invoking Ryan Dahl as some sort of authority figure and not a tragic figure that sold his soul to VC companies is even more pathetic.
I don't appreciate this comment. Calling me a charlatan is rude. He's not an authority, but he has more credibility than you and most people on HN.
There is an obvious division of ideas here. But calling one side stupid or referring to them as charlatans is outright wrong and biased.
1 reply →
My personal suspicion is that the detractors value process and implementation details much more highly than results. That would not surprise me if you come from a business that is paid for its labor inputs and is focused on keeping a large team billable for as long as possible. But I think hackers and garage coders see the value of “vibing” as they are more likely to be the type of people who just want results and view all effort as margin erosion rather than the goal unto itself.
The only thing I would change about what you said is, I don’t see it as a child that needs tutoring. It feels like I’m outsourcing development to an offshore consultancy where we have no common understanding, except the literal meaning of words. I find that there are very, very many problems that are suited well enough to this arrangement.
> But I think hackers and garage coders see the value of “vibing”
That's a massive generalization.
My 2c: there is a divide, unacknowledged, between developers that care about "code correctness" (or any other quality/science/whatever adjective you like) and those who care about the whole system they are creating.
I care about making stuff. "Making stuff" means stuff that I can use. I care about code quality yes, but not to an obsessive degree of "I hate my framework's ORM because of <obscure reason nobody cares about>". So, vibe coding is great, because I know enough to guide the agent away from issues or describe how I want the code to look or be changed.
This gets me to my desired effect of "making stuff" much faster, which is why I like it.
My other 2c: there are Engineers who are concerned with the long-term consequences of their work, e.g. maintainability.
In real engineering disciplines, the Engineer is accountable for their work. If a bridge you signed off on collapses, you're accountable, and if it turns out you were negligent you'll face jail time. In Software, the equivalent might be the program running in a car.
The Engineering mindset embodies these principles regardless of regulatory constraints. The Engineer needs to keep in mind those who'll be using their constructions. With Agentic Vibecoding, I can never be confident that the resulting software will behave according to spec. I'm worried that it'll screw over the user, the client, and all the stakeholders. I can't accept half-assed work just because it saved me 2 days of typing.
I don't make stuff just for the sake of making stuff; otherwise it would just be a hobby, and in my hobbies I don't need to care about anything. But I can't in good conscience push shit and slop down other people's throats.
2 replies →
In real Engineering disciplines the process is important, and is critical for achieving the desired results; that's why there are manuals and guidelines hundreds of pages long for things like driving a pile into dirt. There are rigorous testing procedures to ensure everything is correct and up to spec, because there are real consequences.
Software Developers have long been completely disconnected from the consequences of their work, and tech companies have diluted responsibility so much that working software doesn't matter anymore. This field is now mostly scams and bullshit, where developers are closer to finance bros than real, actual Engineers.
I'm not talking about what someone is building at home for personal use, but about giving the same thing to other people.
In the end it's just cost cutting.