Comment by bwfan123
5 days ago
Agree - it is written like clickbait, or worse, like a sponsored piece.
> But “hallucination” is the first thing developers bring up when someone suggests using LLMs, despite it being (more or less) a solved problem.
Really? What is the author smoking to consider it a solved problem? This statement alone invalidates the entire article in its casual irreverence for the truth.
I use Copilot every day, and I know where it shines. Please don't try to sell it to me with false advertising.
The article specifically says it's not talking about Copilot, but about agents that verify the code compiles before they show it to you.
If it uses a function, then you can be sure that function is real.
Was this not clear? The explanation I'm paraphrasing is right between the line Aurornis quoted and the line you quoted, except for the crack at Copilot that's up at the top.
Have you read the hilarious PRs that Copilot put out last week? They're here for your reference [1]. The humor is in the giant gap between what it can do and what the hype says it can do.
Can you show me one PR put out by any agent in any widely used open-source repo?
[1] https://www.reddit.com/r/ExperiencedDevs/comments/1krttqo/my...
It's an instance of dogfooding, but supposedly the latest release of Aider [1] had the agent write 79% of its own code.
[1]: https://github.com/Aider-AI/aider/blob/main/HISTORY.md#aider...