Comment by jaccola
18 hours ago
Yeah, I don’t know what’s true when reading about LLMs. Same with comments here on Hacker News. With so much money on the line, it’s clear they would seed communities with marketing shills (and some people are just tribal).
Same here: since they own Bun, they have every incentive to make this seem easier than it was.
This is a huge problem with AI specifically. Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring more and more.
Influencers are getting paid tens of thousands of USD to promote AI. This is one of the reasons social media has been swamped with it lately.
This! I now have to fight bad tech decisions at my companies because many devs follow influencers.
Look also at the hate spread against UE5… It’s everywhere, and half of the arguments are falsehoods made by influencers with no real experience in the industry…
Yes, some of the latest campaigns:
https://news.ycombinator.com/item?id=47945021
There were earlier initiatives from the industry. This is just what is in the open and does not even include automated LLM "influencers".
> Tech is becoming very adversarial for workers, since the lines between marketing and technical information are blurring
Since one of LLMs' largest markets (with product fit) is us developers, we are experiencing what the crypto bros did to others.
You can just use AI for yourself and see. It isn't some mysterious product that only a few people get to use.
This is the thing. I do use LLMs (mostly Anthropic).
It just does not generate good, usable code. I have to review every single change more closely than I would my own code, because it likes to slip in hidden nasties. I have to rewrite at least 50% of what it generates.
That being said, I know devs who swear that they don’t even write code anymore. Like this Rust port. I can’t even fathom blindly merging something this massive.
I think we're still seeing pretty wild variance in how effective LLMs can be for code, depending on who is driving it. I've seen some folks getting themselves into messes pretty regularly with LLMs. But, ever since Opus 4.5, it's been pretty obviously better to work with it than without it, remarkably better in some use cases. Porting an application with source available and a huge existing test suite is pretty much the ideal use case for an LLM. It has everything it needs to succeed. I can't imagine why anyone would embark on a porting effort without an LLM at this point.
While this is true, it's also true that few people have the budget to spend a bunch of tokens on porting bun over to rust.
And yet we have stories[0] of companies judging merit by tokens used.
Rather than spending those tokens on rewrites that could massively improve the day-to-day, they're just burnt for the sake of burning them.
Individual initiative and company culture are at play as much as budget.
0: https://news.ycombinator.com/item?id=48110529
Most people do use LLMs, which is why they have the so-called pessimistic opinions they do.
Judging by most public comments, people are really mediocre at using them. I don't get how it's possible to get such poor results from them.
I'm not sure it matters what anyone claims. It's easy to use and experience its abilities and limitations.
The truth lies somewhere in the middle.
Context: 20 years coding, 13-ish of which professional. Using LLMs for side projects, including a very big one. Also using them to help manage our home server.
I’ve used 20-ish agents with OpenRouter, Google’s own AGY, Mistral’s Vibe, and Claude Code. The good ones are good and can be very helpful with spec’ing work or handling repetitive tasks. Except for Opus 4.6, none of them produce TypeScript that I’d be super proud of; but they write stuff that’s good enough compared to what I’ve seen in the industry. It’s always some mix of spaghetti and shortcuts. That’s fine, you steer the model and tighten your specs and tests.
Anyone claiming ‘Model X can one-shot an app’ is delusional about maintainability, deployment, and all the little things that grease the wheels. Anyone claiming ‘LLMs are useless’ is probably not being impartial. That’s it.
And any company claiming AI is awesome at everything and will replace everyone? Yeah, they’re lying, at least about its capabilities as of right now.