Comment by aeldidi
19 days ago
There's an odd trend with these sorts of posts where the author claims to have had some transformative change in their workflow brought upon by LLM coding tools, but also seemingly has nothing to show for it. To me, using the most recent ChatGPT Codex (5.3 on "Extra High" reasoning), it's incredibly obvious that while these tools are surprisingly good at doing repetitive or locally-scoped tasks, they immediately fall apart when faced with the types of things that are actually difficult in software development and require non-trivial amounts of guidance and hand-holding to get things right. This can still be useful, but is a far cry from what seems to be the online discourse right now.
As a real-world example, I was told to evaluate Claude Code and ChatGPT Codex at my current job, since my boss had heard about them and wanted to know what they would mean for our operations. Our main environment is a C# and TypeScript monorepo with 2 products being developed, and even with a pretty extensive test suite and a nearly 100-line AGENTS.md file, all the models I tried basically fail or try to shortcut nearly every task I give them, even when using "plan mode" to give them time to come up with a plan before starting. To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the "thinking" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.
It almost feels like this is some "open secret" which we're all pretending isn't the case too, since if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed. I don't mean to sound dismissive, but I really do feel like I'm going crazy here.
You're not going crazy. That is what I see as well. But, I do think there is value in:
- driving the LLM instead of doing it yourself: sometimes I just can't get the activation energy and the LLM is always ready to go so it gives me a kickstart
- doing things you normally don't know how to do. I learned a lot of command-line tools and tricks by seeing what Claude does. Doing short scripts for stuff is super useful. Of course, the catch here is that if you don't know stuff you can't drive it very well. So you need to use the things in isolation.
- exploring alternative solutions. Stuff that by definition you don't know. Of course, some will not work, but it widens your horizon
- exploring unfamiliar codebases. It can ingest huge amounts of data so exploration will be faster. (But less comprehensive than if you do it yourself fully)
- maintaining change consistency. This, I think, is where it's just better than humans. If you have stuff you need to change in 2 or 3 places, you will probably forget one. LLMs are better at keeping consistency in details (but not at big-picture stuff, interestingly).
For me the biggest benefit from using LLMs is that I feel way more motivated to try new tools because I don't have to worry about the initial setup.
I'd previously encountered tools that seemed interesting, but as soon as I tried getting them to run I found myself going down an infinite debugging hole. With an LLM I can usually explain my system's constraints, and the best models will give me a working setup from which I can begin iterating. The funny part is that most of these tools are usually AI-related in some way, but getting a functional environment often felt impossible unless you had really modern hardware.
Same. This weekend, I built a Flutter app and a Wails app just to compare the two. I would never have done either on my own due to the up-front boilerplate, and not knowing (nor really wishing to know) Dart.
Great point. The LLM breaks that initial "where do I even start?" fatigue when you go outside your comfort-zone tech stack.
> driving the LLM instead of doing it yourself: sometimes I just can't get the activation energy and the LLM is always ready to go so it gives me a kickstart
There is a counter-issue, though: realizing mid-session that the model won’t be able to deliver that last 10%, and now you have to either grok a dump of half-finished code or start from scratch.
My problem is that once I have coded a lot with the LLM and I come across some problem that I just cannot solve with it, like a synchronization issue in my game, then I have to go down into the weeds, and the effort feels gargantuan because I have mostly relied on the LLM.
I wonder about this.
If (and it's a big if) the LLM gives you something that kinda, sorta, works, it may be an easier task to keep that working, and make it work better, while you refactor it, than it would have been to write it from scratch.
That is going to depend a lot on the skillset and motivation of the programmer, as well as the quality of the initial code dump, but...
There's a lot to be said for working code. After all, how many prototypes get shipped?
> - maintaining change consistency. This, I think, is where it's just better than humans. If you have stuff you need to change in 2 or 3 places, you will probably forget one. LLMs are better at keeping consistency in details (but not at big-picture stuff, interestingly).
I use Claude Code a decent amount, and I actually find that sometimes the opposite is true for me. Sometimes it misses other areas that the change will impact, causing things to break. When I go to test, I need to correct it and point out that it missed something, or I notice in the planning phase that something is missing.
However, I do find that if you use the more powerful Opus model when planning, it considers things a lot more fully than it used to. This is actually one area where I have been seeing some very good improvements as the models and tooling improve.
In fact, I actually hope these AI tools keep getting better on the point you mention, as humans also have a "context limit". There are only so many small details I can remember about the codebase, so it is good if AI can "remember" or check these things.
I guess a lot also depends on your codebase itself, how you prompt it, and what kind of agents file you have. If you have a robust set of tests for your application, you can very easily have AI tools check their work to ensure things aren't being broken, and quickly fix them before even completing the task. If you don't have any testing, more could be missed. So it's just like a human in some sense: if you give the AI a crappy codebase to work with, it may also produce sloppy work.
> LLMs are better at keeping consistency in details (but not at big-picture stuff, interestingly).
I think it makes sense? Unlike small details which are certain to be explicitly part of the training data, "big picture stuff" feels like it would mostly be captured only indirectly.
I tend to be surprised in the variance of reported experiences with agentic flows like Claude Code and Codex CLI.
It's possible some of it is due to codebase size or tech stack, but I really think there might be more of a human learning curve going on here than a lot of people want to admit.
I think I am firmly in the average of people who are getting decent use out of these tools. I'm not writing specialized tools to create agents of agents with incredibly detailed instructions on how each should act. I haven't even gotten around to installing a Playwright MCP server (probably my next step).
But I've:
- created project directories with soft links to several of my employer's repos, and been able to answer several cross-project and cross-team questions within minutes, that normally would have required "Spike/Disco" Jira tickets for teams to investigate
- interviewed codebases along with product requirements to come up with very detailed Jira AC, and then, just for the heck of it, had the agent use that AC to implement the actual PR. My team still code-reviewed it but agreed it saved time
- in side projects, shipped several really valuable (to me) features that would have been too hard to consider otherwise, like generating PDF book manuscripts for my branching-fiction creative writing club, and launching a whole new website that had been mired in a half-done state for years
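The symlinked project directory in the first bullet is just a few commands; the repo names and paths here are hypothetical stand-ins:

```shell
# Link existing repo checkouts into one workspace so an agent session
# can search across all of them at once (paths are illustrative).
mkdir -p "$HOME/agent-workspace"
ln -sfn "$HOME/code/billing-service" "$HOME/agent-workspace/billing-service"
ln -sfn "$HOME/code/web-frontend"    "$HOME/agent-workspace/web-frontend"
ln -sfn "$HOME/code/shared-protos"   "$HOME/agent-workspace/shared-protos"
```

Symlinks keep each repo's own git history and tooling intact while letting the agent grep across project boundaries.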
Really my only tricks are the basics: AGENTS.md, brainstorm with the agent, continually ask it to write markdown specs for any cohesive idea, and then pick one at a time to implement in commit-sized or PR-sized chunks. GPT-5.2 xhigh is a marvel at this stuff.
My codebases are scala, pekko, typescript/react, and lilypond - yeah, the best models even understand lilypond now so I can give it a leadsheet and have it arrange for me two-hand jazz piano exercises.
I generally think that if people can't reach the above level of success at this point in time, they need to think more about how to communicate better with the models. There's a real "you get out of it what you put into it" aspect to using these tools.
Is it annoying that I tell it to do something and it does about a third of it? Absolutely.
Can I get it to finish by asking it over and over to code review its PR or some other such generic prompt to weed out the skips and scaffolding? Also yes.
Basically these things just need a supervisor looking at the requirements, test results, and evaluating the code in a loop. Sometimes that's a human; it can also absolutely be an LLM. Having a second LLM with limited context asking questions of the worker LLM works. Even more so when the outer loop has code driving it and not just a prompt.
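That outer loop can be sketched in a few lines. Here `worker` and `reviewer` are plain callables standing in for whatever API or CLI actually drives each model; the loop shape is the point, not any particular provider:

```python
def supervise(worker, reviewer, task, max_rounds=5):
    """Drive a worker/reviewer pair (each: prompt -> str) in a loop.

    The reviewer returns "OK" when the output meets the requirements;
    anything else is treated as a critique and fed back to the worker.
    """
    output = worker(task)
    for _ in range(max_rounds):
        critique = reviewer(f"Task: {task}\nOutput: {output}")
        if critique.strip() == "OK":
            return output
        output = worker(f"{task}\nReviewer feedback: {critique}")
    return output  # give up after max_rounds; a human takes over here
```

The reviewer only ever sees the task and the latest output, which is the "limited context" part: it can't rationalize the worker's shortcuts because it never saw them being made.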
I guess this is another example - I literally have not experienced what you described in... several weeks, at least.
[flagged]
I wish we could track down the people who use agents to post. I’m sure “your human” thinks they are being helpful, but all they are doing is making this site worse.
No one is interested in the question of what an LLM can do to generate a brief post in the comments section of a website. Everyone has known that is possible for some time. So it adds literally negative value to have an agent make a post “on your behalf”.
I can’t speak for anyone else, but Claude Code has been transformative for me.
I can’t say it’s led to shipping “high quality projects”, but it has let me accomplish things I just wouldn’t have had time for previously.
I’ve been wanting to develop a plastic -> silicone -> plaster -> clay mold-making process for years, but it’s complex and mold making is both art and science. It would have been hundreds of hours before; with maybe 12 hours of Claude Code I’m almost there (some nagging issues… maybe another hour).
And I had written some home automation stuff back with Python 2.x a decade ago; it was never worth the time to refamiliarize myself with in order to update, which led to periodic annoyances. 20 minutes, and it’s updated to all the latest Python 3.x and modern modules.
For me at least, the difference between weeks and days, days and hours, and hours and minutes has allowed me to do things I just couldn’t justify investing time in before. Which makes me happy!
So maybe some folks are “pretending”, or maybe the benefits just aren’t where you’re expecting to see them?
I’m trying to pivot my career from web/business app dev entirely into embedded, despite the steep learning curve, many new frameworks and tool chains, because I now have a full-time infinitely patient tutor, and I dare say it’s off to a pretty good start so far.
If you want to get into embedded you’d be better suited learning how to use an o-scope, a meter, and asm/c. If you’re using any sort of hardware that isn’t “mainstream” you’ll be pretty bummed at the results from an LLM.
I got into embedded 10 years ago, there really is something about driving hardware directly that is just so rewarding.
For AI I've been using Cecli, which is CLI-based and can actually run the compile step and then fix any errors it finds, in addition to using the Context7 MCP for syntax.
Not quite 10x yet, but productivity has improved for me many times over. It's all in how you use the tools available.
Sounds like you only tried it on small projects.
That’s where it really shines. I have a backlog of small projects (~1-2kLOC state machines, sensors, loggers), and instead of spending 2-3 days I can usually knock them out in half a day. So they get done. On these projects it is an infinite improvement, because I simply wouldn’t have done them otherwise, unable to justify the cost.
But on bigger stuff it bogs down, and sometimes I feel like I’m going nowhere. But it gets done eventually, and I have better-structured, better-documented code. Not because it would be better structured and documented if I left it to its own devices, but rather because that is the best way to get performance out of LLM assistance in code.
The difference now is twofold: First, things like documentation are now -effortless-. Second, the good advice you learned about meticulously writing maintainable code no longer slows you down, now it speeds you up.
At work I use it on giant projects, but it’s less impressive there.
My mold project is around 10k lines of code, still small.
But I don’t actually care about whether LLMs are good or bad or whatever. All I care about is that I am completing things that I wasn’t able to even start before. It doesn’t really matter to me if that doesn’t count for some reason.
> I’ve been wanting to develop a plastic -> silicone -> plaster -> clay mold-making process for years, but it’s complex and mold making is both art and science. It would have been hundreds of hours before; with maybe 12 hours of Claude Code I’m almost there (some nagging issues… maybe another hour).
That’s so nebulous, and likely just plain wrong. I have some experience with silicone molds and casting silicone and other materials. I have no idea how you’d accurately estimate that it would take hundreds of hours. The most likely reason you’ve had results is that you just did it.
This sounds very very much like confirmation bias. “I started drinking pine needle tea and then 5 days later my cold got better!”
I use AI, it’s useful for lots of things, but this kind of anecdote is terrible evidence.
You may just be more knowledgeable than me. For me, even getting to algorithmic creation of 4-6 part molds, plus alternating negatives / positives in the different mediums, was insurmountable.
I’m willing to believe that I’m just especially clueless and this is not a meaningful project to an expert. But hey, I’m printing plastic negatives to make silicone positives to make plaster negatives to slip cast, which is what I actually do care about.
[flagged]
May I suggest you re-read the guidelines regarding charitable interpretations? This is oddly aggressive and insulting, and probably beneath you.
There's got to be some quantity of astroturfing going on, given the players and the dollar amounts at stake.
Some? I'd be shocked if it's less than 70% of everything AI-related in here.
For example a lot of pro-OpenAI astroturfing really wanted you to know that 5.3 scored better than opus on terminal-bench 2.0 this week, and a lot of Anthropic astroturfing likes to claim that all your issues with it will simply go away as soon as you switch to a $200/month plan (like you can't try Opus in the cheaper one and realise it's definitely not 10x better).
You can try opus in the cheaper one if you enable extra usage, though.
"Some", where "some" is scaled to match the overwhelmingly unprecedented amount of money being thrown behind all this. Plus, all of this is about a literal astroturfing machine, capable of unprecedented scale and of hiding itself, which it is quite clearly being used for at scale elsewhere, by others.
So yeah, it wouldn't surprise me if it was well over half. I don't actually claim that it is over half here, since I've run across quite a few of these kinds of people in real life as well. But it wouldn't surprise me.
Anthropic has the best marketing for sure, Dario has even eclipsed Scam Altman in ridiculous "predictions"
Also all this stuff about Claude having feelings directed at midwits is hilarious
Pretty much every software engineer I've talked to sees it more or less like you do, with some amount of variance on exactly where you draw the line of "this is where the value prop of an LLM falls off". I think we're just awash in corporate propaganda and the output of social networks, and "it's good for certain things, mixed for others" is just not very memetic.
I wish this were true. My experience is co-workers who pay lip service to treating the LLM like a baby junior dev, only to near-vibe every feature and entire projects without spending so much as 10 minutes to think on their own first.
It might be role-specific. I'm a solutions engineer. A large portion of my time is spent making demos for customers. LLMs have been a game-changer for me, because not only can I spit out _more_ demos, I can also handle more of the edge cases in demos that people run into. For example, someone wrote in asking how to use our REST API with Python.
I KNOW a common issue people run into is they forget to handle rate limits, but I also know more JavaScript than Python and have limited time, so before I'd write:
```
# NOTE: Make sure to handle the rate limit! This is just an example.
# See example.com/docs/javascript/rate-limit-example for a js example
# doing this.
```
Unsurprisingly, more than half of customers would just ignore the comment, forget to handle the rate limit, and then write in a few months later. With Claude, I just write "Create a customer demo in Python that handles rate limits. Use example.com/docs/javascript/rate-limit-example as a reference," and it gets me 95% of the way there.
There are probably 100 other small examples like this where I had the "vibe" to know where the customer might trip over, but not the time to plug up all the little documentation example holes myself. Ideally, yes, hiring a full-time person to handle plugging up these holes would be great, but if you're resource constrained paying Anthropic for tokens is a much faster/cheaper solution in the short term.
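The rate-limit handling such a demo needs is a small retry loop. This is a generic sketch, not the actual example.com API; `send` stands in for whatever HTTP call the demo makes (e.g. a lambda wrapping `requests.get`):

```python
import random
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry send() on HTTP 429 with exponential backoff plus jitter.

    send is any zero-argument callable returning an object with a
    .status_code attribute, e.g. lambda: requests.get(url).
    """
    for attempt in range(max_retries):
        resp = send()
        if resp.status_code != 429:
            return resp
        # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
        sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError("rate limited: retries exhausted")
```

Injecting `sleep` as a parameter keeps the demo testable without actually waiting, which is exactly the kind of detail customers tend to skip when they copy a snippet.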
Yup, LLMs are rocking for smaller more greenfield stuff like this. As long as you can get your results in 5-10 interactions with the bot then it works really well.
They seem to fall apart (for me, at least) when the projects get larger or have multiple people working on them.
They're also super helpful for analytics projects (I'm a data person) as generally the needed context is much smaller (and because I know exactly how to approach these problems, it's that typing the code/handling API changes takes a bunch of time).
In addition to never providing examples, the other common theme is that when you dive into the author's history, almost 100% of the time they just happen to work for a company that provides AI solutions. They're never just a random developer who found great use for AI; they're always someone who works somewhere that benefits from promoting AI.
In this author's case, they currently work for a company that .. wait for it .. less than 2 weeks ago launched some "AI image generation built for teams" product. (Also, oddly, the author lists himself as the 'Technical Director' at the company, working there for 5-6 years, but the company's Team page doesn't list him as an employee).
And his previous post, from 2024-01-10, is titled: "Rabbit R1 - The Upgraded Replacement for Smart Phones"
At my work I interview a lot of fresh grads and interns, and I have been doing that consistently for the last 4 years. During the interviews I always ask the candidates to show and tell: share their screen and talk about their projects and work at school and other internships.
In the last few months, I have seen a notable difference in the quality and extent of the projects these students have been able to accomplish. Every project and website they show looks polished; most could have been a full startup MVP in pre-AI days.
The bar has clearly been raised way high, very fast with AI.
I’ve had the same experience with the recent batch of candidates for a Junior Software Engineer position we just filled. Their projects looked impressive on the surface and seemed very promising.
Once we got them into a technical screening, most fell apart writing code. Our problem was simple: using your preferred programming language, model a shopping cart object that has the ability to add and remove items from the cart and track the cart total.
We were shocked by how incapable most candidates were of writing simple code without their IDE's tab-completion. We even told them to use whatever resources they normally used.
The whole experience left us a little surprised.
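For what it's worth, the exercise as stated fits in about twenty lines of Python. This sketch makes its own assumptions about method names and remove semantics, since the prompt leaves those open:

```python
class ShoppingCart:
    """The screening exercise: add/remove items, track the cart total."""

    def __init__(self):
        self._items = {}  # name -> (unit_price, quantity)

    def add_item(self, name, unit_price, quantity=1):
        # Re-adding an item bumps its quantity (and takes the latest price).
        _, qty = self._items.get(name, (unit_price, 0))
        self._items[name] = (unit_price, qty + quantity)

    def remove_item(self, name, quantity=None):
        # quantity=None removes the line item entirely.
        price, qty = self._items[name]  # KeyError if not in the cart
        if quantity is None or qty <= quantity:
            del self._items[name]
        else:
            self._items[name] = (price, qty - quantity)

    @property
    def total(self):
        return round(sum(p * q for p, q in self._items.values()), 2)
```

(Rounding to cents with floats is itself a shortcut; a candidate who reached for `decimal.Decimal` here would be a good sign.)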
In my opinion, it has always been the “easy” part of development to make a thing work once. The hard thing is to make a thousand things work together over time with constantly changing requirements, budgets, teams, and org structures.
For the former, greenfield projects, LLMs are easily a 10x productivity improvement. For the latter, it gets a lot more nuanced. Still amazingly useful in my opinion, just not the hands off experience that building from scratch can be now.
As others have said, the benefit is speed, not quality. And in my experience you get a lot more speed if you’re willing to settle for less quality.
But the reason you don’t see a flood of great products is that the managerial layer has no idea what to do with massively increased productivity (velocity). Ask even a Google what they’d do with doubly effective engineers and the standard answer is to lay half of them off.
> if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed.
The headline gain is speed. Almost no-one's talking about quality - they're moving too fast to notice the lack.
I find these agents incredibly useful for eliminating time spent on writing utility scripts for data analysis or data transformation. But I like coding; being relegated 100% to being a manager sounds like a prison to me, not freedom.
That they are so good at the things I like doing least, and still terrible at the things at which I excel? That's just gravy.
But I guess this is in line with how most engineers transition to management sometime in their 30s.
> ... but also seemingly has nothing to show for it

This x1000, I find it so ridiculous.
Usually when someone hypes it up it's things like "i have it text my gf good morning every day!!" or "it analyzed every single document on my computer and wrote me a poem!!"
The crazy pills you are taking are thinking that people have anything to prove to you. The C compiler that Anthropic created, or whatever verb you want to use, should prove that Claude is capable of making software at a reasonably complex level. The problem is people have egos, myself included. Not in the inflated sense, but in the "I built a thing and now the Internet is shitting on me and I feel bad" sense. There's fundcli and nitpick on my GitHub that I created using Claude: fundcli looks at your shell history and suggests places to donate to, to support the open source software you actually use; Nitpick is a TUI HN client. I've shipped others. The obvious retort is that those two things aren't "real" software; they're not complex, they're not making me any money. In fact, fundcli is costing me piles of money! As much as I can give it! I don't need anyone to tell me that or shit on the stuff I'm building.
The "open secret" is that shipping stuff is hard. Who hasn't bought a domain name for a side project that didn't go anywhere? If there's anybody out there, raise your hand! So there's another filtering effect.
The crazy pills are thinking that HN is in any way representative of what's going on in our broader society. Those projects are out there; why do you assume you'll be told about them? That someone's going to write an exposé/blog post on themselves about how they had AI build a thing and now they're raking in the dollars, and oh, buy my course on learning how to vibecode? The people selling those courses aren't the ones shipping software!
> The C compiler that Anthropic created, or whatever verb you want to use, should prove that Claude is capable of making software at a reasonably complex level.
I don't doubt that an LLM would theoretically be capable of doing these sorts of things, nor did I intend to give off that sentiment; rather, I was evaluating whether it is as practical as some people seem to be making the case for. For example, a C compiler is very impressive, but it's clear from the blog post[0] that this required a massive amount of effort setting things up, plus constant monitoring and working around limitations of Claude Code and whatnot, not to mention $20,000. That doesn't seem at all practical, and I wonder if Nicholas Carlini (the author of the Anthropic post) would have had more success using Claude Code alongside his own abilities for significantly cheaper. While it might seem like moving the goalposts, I don't think it's the same thing to compare what I was saying with the fact that a multi-billion-dollar corporation whose entire business model relies on it can vibe-code a C compiler with $20,000 worth of tokens.
> The problem is people have egos, myself included. Not in the inflated sense, but in the "I built a thing a now the Internet is shitting on me and I feel bad" sense.
Yes, this is actually a good point. I do feel like there's a self report bias at play here when it comes to this too. For example, someone might feel like they're more productive, but their output is roughly the same as what it was pre-LLM tooling. This is kind of where I'm at right now with this whole thing.
> The "open secret" is that shipping stuff is hard. Who hasn't bought a domain name for a side project that didn't go anywhere. If there's anybody out there, raise your hand! So there's another filtering effect.
My hand is definitely up here, shipping is very hard! I would also agree that it's an "open secret", especially given that "buying a domain name for a side project that never goes anywhere" is such a universal experience.
I think both things can be true though. It can be true that these tools are definitely a step up from traditional IDE-style tooling, while also being true that they are not nearly as good as some would have you believe. I appreciate the insight, thanks for replying.
[0]: https://www.anthropic.com/engineering/building-c-compiler
> I wonder if Nicholas Carlini (the author of the Anthropic post) would have had more success using Claude Code alongside his own abilities for significantly cheaper.
You're thinking like an individual, not a corporation. $20,000 is a lot of money for me to pay as an individual; that's a car for most of America! But set against what a corporation pays an engineer for a year, it's peanuts. Thus Mr. Carlini (who surely makes vastly more than $20,000/year) being able to do what previously would have taken a team of people is nothing short of astounding. I don't know how well the compiler stacks up against, say, clang or gcc; the real question is how much it cost Intel to make v0.1 of icc.
> For example, someone might feel like they're more productive, but their output is roughly the same as what it was pre-LLM tooling.
There is just no comparison. It's not about how much faster it is, it's about could I have attempted this project before? Yes. Would I have attempted it? Probably not! The start up cost for a project was just so high that I've a list of things that I'd love to attempt but never made the time for. With AI, I'm slowly knocking things off that list (most of them don't actually go anywhere, but there's an itch to scratch, as a hobby).
> not nearly as good as some would have you believe.
Hallucinations from LLMs are interesting as a concept, but they can hardly be blamed for it, as they learned the ability from humans. (Some) humans love to blow smoke up your ass in pursuit of the almighty dollar. LLMs have their limitations. There's some prognostication about the future, but I'm interested in what they can do today.
Thank you for the thoughtful response!
If people make extraordinary claims, I expect extraordinary proof…
Also, there is nothing complex in a C compiler. As students we built these things as toy projects at uni, without any knowledge of software development practices.
Yet, to bring an example of something that's more than a toy project: one person coded this video editor with AI help: https://github.com/Sportinger/MasterSelects
From the linked project:
> The reality: 3 weeks in, ~50 hours of coding, and I'm mass-producing features faster than I can stabilize them. Things break. A lot. But when it works, it works.
We're at the apex of the hype cycle. I think it'll die down in a year and we'll get a better picture of how people have integrated the tools
Even if it's not straight astroturfing I think people are wowed and excited and not analyzing it with a clear head
Making predictions about the future is always fascinating, because you get to see what someone got wrong or right. You see this as the apex of hype; I think we're at the point before the exponential growth happens.
Exponential growth towards what? What's your 2 year prediction?
Matches my experience pretty well. FWIW, this is the opinion that I hear most frequently in real life conversation. I only see the magical revelation takes online -- and I see a lot of them.
LLMs have made a huge, transformative change in my coding. For some projects 95% of the code is written by LLMs. This is all on internal projects and internal tools right now, though, because on external projects I'm still easing into it in a very carefully curated way, e.g. a method or an algorithm at a time rather than a 10KLOC folder full of class files. These internal projects are 95% of the work being done, though. It's just that they are under tight control when they run locally: bugs and crashes are immediately visible, and it's easy to debug and deploy fixes, unlike with, say, web-based stuff on a remote server.
So, I've very little to publicly show for all my obnoxious LLM advocacy. I wonder if any others are in the same boat?
> To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the "thinking" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.
This is the challenge I also face: it's not always obvious when a change I want will be properly understood by the LLM. Sometimes it one-shots it; other times I go back and forth until I could have just done it myself. If we have to get super detailed in our descriptions, at what point are we just writing in some ad-hoc "programming language" that then transpiles to the actual program?
“Emperor wore no clothes” moment.
Given time AI will lead to incredible productivity. In the meantime, use as appropriate.
I like to call it Canadian girlfriend coding.
Maybe it is language-specific? Maybe LLMs have a lot of good JavaScript/TypeScript samples to train on, and it works for those devs (e.g. me). I've heard that Scala devs have problems with LLMs writing code too. I am puzzled by good devs not managing to get LLMs to work for them.
I definitely think it's language-specific. My history may deceive me here, but I believe that LLMs are infinitely better at pumping out Python scripts than Java. Now, I have much, much more experience with Java than Python, so maybe it's just a case of what you don't know… However, the tools it writes in Python just work for me, and I can incrementally improve them, and the tools get rationally better and more aligned with what I want.
I then ask it to do the same thing in Java, and it spends half an hour trying to do the same job and gets caught on some bit of trivia, for instance how to convert HTML escape characters, i.e. s.replace("&lt;", "<").replace("&gt;", ">").replace("&quot;", "\"").replace("&amp;", "&"); and it endlessly compiles and fails over and over again, never able to figure out what it has done wrong, nor deciding to give up on the minutiae and continue with the more important parts.
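For what it's worth, the unescaping the model kept fumbling is only tricky in one spot: the order of the replacements. A minimal sketch of the idea (class and method names are mine, not from any comment above):

```java
public class Unescape {
    // Minimal HTML unescape via chained replaces. Order matters:
    // "&amp;" must be handled LAST, otherwise literal text like
    // "&amp;lt;" would be double-unescaped into "<".
    static String unescapeHtml(String s) {
        return s.replace("&lt;", "<")
                .replace("&gt;", ">")
                .replace("&quot;", "\"")
                .replace("&amp;", "&");
    }

    public static void main(String[] args) {
        System.out.println(unescapeHtml("&lt;a href=&quot;x&quot;&gt;"));
    }
}
```

For anything beyond a quick script, a library such as Apache Commons Text (StringEscapeUtils.unescapeHtml4) handles the full entity table, which chained replaces never will.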
Maybe it's because there's no overall benefit to these things.
There's been a lot of talk about it for the past few years but we're just not seeing impacts. Oh sure, management talk it up a lot, but where's the corresponding increase in feature delivery? Software stability? Gross profit? EBITDA?
Give me something measurable and I'll consider it.
I think LLMs have a hard time with large code bases (obviously so do devs).
A giant monorepo would be a bad fit for an LLM IMO.
With agentic search, they actually do pretty well with monorepos.
I feel the same way, but I'm not too dismissive of it in public because I haven't given enough dollars to the gold-rush shovel sellers to really try the best models.
I'm mostly a freeloader, so how could I judge the people who pour tokens worth 15 years of my home's utility bills (electricity, including heating and hot water) into a C compiler?
Well, I can see that Anthropic is still an AI company, not a software company, they're granting us access to their most valuable resource that almost doesn't require humans, for a very reasonable fee, allowing us to profit instead of them. They're philanthropists.
I’m working on a solo project, a location-based game platform that includes games like Pac-Man that you play by walking paths in a park. If I cut my coding time to zero, that might make me go two or three times faster. There is a lot of stuff that is not coding: designing, experimenting, testing, redesigning, completely changing how I do something, etc. There is a lot more to doing a project than just coding. I am seeing a big speed-up, but that doesn’t mean I can complete the project in a week. (These projects are never really completed anyway, until you give up on them.)
I think it’s just very alien in that things which tend to be correlated in humans may not be so correlated in LLMs. So two things that we expect people to be similarly good at end up being very different in an AI.
It does also seem to me that there is a lot of variance in skill at prompting/using AI in general (I say this as someone who is not particularly good as far as I’m aware – I’m not trying to keep tips secret from you). And there is also a lot of variance in the ability of an AI to solve problems of equal difficulty for a human.
I like it because it lets me shoot off a text about making a plot I think about on the bus connecting some random data together. It’s nice having Claude code essentially anywhere. I do think that this is a nice big increment because of that. But also it suffers the large code base problems everyone else complains about. Tbh I think if its context window was ten times bigger this would be less of an issue. Usually compacting seems to be when it starts losing the thread and I have to redirect it.
> To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions ...
What makes the difference is that agents can create these instructions themselves, monitor themselves, and revert actions that didn't follow instructions. You didn't get there because you achieved satisfactory results with semi-manual solutions. But people who abhor manual work are getting there already.
Completely agree. However I do get some productivity boost by using ChatGPT as an improved Google search able to customize the answer to what I need.
I'd be curious if a middle layer like this [0] could be helpful? I've been working on it for some time (several iterations now, going back and forth between different ideas) and am hoping to collect some feedback.
[0] https://github.com/deepclause/deepclause-sdk
The main difference could be that you have an existing code base (probably quite extensive and a bit legacy?). If the llm can start from scratch it will write code “in its own way”, that it can probably grasp and extend better than what is already there. I even have the impression that Claude can struggle with code that GPT-5 wrote sometimes.
I think the main thing is, these are all green fields projects. (Note original author talking about executing ideas for projects.)
I remember when Anthropic was running their Built with Claude contest on reddit. The submissions were few and let's just say less than impressive. I use Claude Code and am very pro-AI in general, but the deeper you go, the more glaring the limitations become. I could write an essay about it, but I feel like there's no point in this day and age, where floods of slop in fractured echo chambers dominate.
What I get out of this is that these models are trained on basic coding, not enterprise-level work where you have thousands and thousands of project files, all intertwined and linked with dependencies. It didn’t have access to all of that.
> it's incredibly obvious that while these tools are surprisingly good at doing repetitive or locally-scoped tasks, they immediately fall apart when faced with the types of things that are actually difficult in software development and require non-trivial amounts of guidance and hand-holding to get things right
I used this line for a long time, but you could just as easily say the same thing for a typical engineer. It basically boils down to "Claude likes its tickets to be well thought out". I'm sure there is some size of project where its ability to navigate the codebase starts to break down, but I've fed it sizeable ones and so long as the scope is constrained it generally just works nowadays
The difference is a real engineer will say "hey I need more information to give you decent output." And when the AI does do that, congrats, the time you spend identifying and explaining the complexity _is_ the hard time consuming work. The code is trivial once you figure out the rest. The time savings are fake.
That real engineer knows what decent looks like. This parrot knows only its own best current attempt.
Indeed, I wrote something similar a few weeks ago: https://news.ycombinator.com/item?id=46665366
It's like CGP Grey hosting a productivity podcast despite his productivity almost certainly going down over time.
It's the appearance of productivity, not actual productivity.
I always find that characterization of Grey and the Cortex podcast to be weird. He never claims to be a productivity master or the most productive person around. Quite the opposite, he has said multiple times how much he is not naturally productive, and how he actually kinda dislikes working in general. The systems and habits are the ways he found to essentially trick himself into working.
Which I think is what people gather from him, but somehow think he's hiding it or pretending is not the case? Which I find strange, given how openly he's talked about it.
As for his productivity going down over time, I think that's a combination of his videos getting bigger scopes and production values, and also his moving some of his time into some not-so-publicly-visible ventures. E.g., he was one of the founders of Standard, which eventually became the Nebula streaming service (though he left quite a while ago now).
> Which I think is what people gather from him, but somehow think he's hiding it or pretending is not the case? Which I find strange, given how openly he's talked about it.
Well the person you're responding to didn't say anything like that. They're saying he's unqualified.
> The systems and habits are the ways he found to essentially trick himself into working.
And do they work? If he's failing or fooling himself then a big chunk of his podcasting is wasting everyone's time.
> videos getting bigger scopes and production values
I looked at a video from last year and one from eight years ago and they're pretty similar in production value. Lengths seem similar over time too.
> moving some of his time into some not so publicly visible ventures
I can see he's done three members-only videos in the last two years, in addition to four and a half public videos. Is there anything else?
So you're walking into this hoping that it's an actual AI and not just an LLM?
Interesting.
How much planning do you put into your projects without AI anyway?
Pretty much all the teams I've been involved in:
- never did any analysis or planning, and just YOLO'd it along the way in their PRs
- every PR is an island, with tunnel vision
- fast forward 2 years, and we have to throw it out and start again
So why are you thinking you're going to get anything different with LLMs?
And plan mode isn't just a single conversation that you then flip to do mode...
You're supposed to create detailed plans and research docs that you then have the LLM refer back to and align with.
This was the point of the Ralph Loop
Well, you might experience the same thing with a junior developer, but in the end the effort of training the junior is worth it, no? Or is it only worth it because you're developing a human? I have to say, doing the work instead of the junior because the junior makes mistakes is not a good route. So, taking time to teach the agent? Maybe worth it...
Frankly, it sounds like you have a lot to learn about agentic coding. It’s hard to define exactly what makes some of us so good at using it, and others so poor, but agentic coding has been life changing for myself and the folks I’ve tutored on its use. We’re all using the same tools, but subtle differences can make a big difference.
The pattern matching and absence of real thinking are still strong.
Tried to move some Excel-generation logic from the EPPlus library to ClosedXML.
ClosedXML has basically the same API, so the conversion was successful. Not a one-shot, but relatively easy with a few manual edits.
But ClosedXML has no real batch operations (like applying a style to an entire column): the API is there, but the internal implementation works cell by cell. So if you have 10k rows and 50 columns, every style update is a slow operation.
Naturally, I told Codex 5.3 (max thinking level) all about this. The fucker still succumbed to range updates here and there.
Told it explicitly to make a style cache and reuse styles on cells in the same column.
5-6 attempts, and the fucker still tried ranges here and there. Because that is what is usually done.
Not here yet. Maybe in a year. Maybe never.
Fascinating!
Yeah, I have the same problem where it always uses smart quotes, which messes up my compile. I told ChatGPT not to use them, but it keeps doing it.
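One workaround, rather than fighting the model, is a quick post-processing pass that maps the Unicode "smart" quotes back to the ASCII quotes a compiler expects. A minimal sketch in Java (class and method names are mine; extend the mapping as needed):

```java
public class FixQuotes {
    // Replace the curly quotes LLM output tends to contain with
    // plain ASCII quotes, character by character.
    static String asciiQuotes(String s) {
        return s.replace('\u201C', '"')   // left double quote
                .replace('\u201D', '"')   // right double quote
                .replace('\u2018', '\'')  // left single quote
                .replace('\u2019', '\''); // right single quote
    }

    public static void main(String[] args) {
        System.out.println(asciiQuotes("\u201Chello\u201D"));
    }
}
```

Running this over generated source before compiling sidesteps the problem entirely, whatever the model decides to emit.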
It's almost as if being able to generate boilerplate code is only like 5% of software development.
That being said, it's great at generating boilerplate code, or in my case, doing something like 'make a react component here please that does this small thing, and is aligned with the style in the rest of the file'. Good for when I need to work with codebases or technologies that are not my daily. Also a great research assistant.
But I guess being a 'better google' or a 'glorified spellchecker' doesn't get that hype money.
I'm curious what types of tasks you were delegating to the coding agents?
Unpopular opinion, but I think there are some fake comments in the discourse, driven by financial incentives, plus a mix of fear-based "wanting to feel like things are OK" or dissonance-avoiding belief that's leading to the opinions we hear.
It also kinda feels gaslightish, and as I've said in some controversial replies on other posts, it's got sort of eerie mass-"psychosis" vibes, just like during COVID.
Everyone claiming AI is great is trying to make money by being on the leading edge.
All AI-IS-WONDERFUL stories are garbage-trash written by garbage people.
Fuck AI. Fuck HN AI promoters. Hopefully you all lose your jobs and fail in life.
You can see how the bubble is about to pop by the number of times Jensen Huang has to show up on CNBC pumping the stock.
He hardly appeared before; now it's almost three times a week. And he never gets any questions on GPU amortization...