Comment by bdangubic

6 days ago

> they can't be aware of the latest changes in the frameworks I use, and so force me to use older features, sometimes less efficient ones

of course they can: teach them / feed them the latest changes or whatever you need (much like you would another developer unaware of the same thing)

> they fail at doing clean DRY practices even though they are supposed to skim through the codebase much faster than me

tell them it is not DRY until they make it DRY. for some projects (several I’ve been involved with), DRY is generally an anti-pattern when taken to extremes (abstraction gone awry etc…). instruct it on what you expect and watch it deliver (much like you would another developer…)

> they bait me into nonexistent APIs, or hallucinate solutions or issues

tell it when it hallucinates, it’ll correct itself

> they cannot properly pick the context and the files to read in a mid-size app

provide it with context (you should always do this anyways)

> they suggest downloading random packages, sometimes low-quality or unmaintained ones

tell it about it, it will correct itself

Anecdotally, ChatGPT still struggles with its own API. It keeps juggling between different versions of its API and hallucinates API parameters, even when I force-feed official documents into the context (to be fair, the documentation is straight awful). Sometimes it totally refuses to change its basic assumptions, so I have to blow up the context just to make it use the up-to-date API correctly.
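To make the mix-up concrete, here is a minimal sketch (assuming the OpenAI Python SDK; the model name is a placeholder) of the two call styles it keeps conflating: the pre-1.0 module-level call versus the current 1.x client.

```python
# What the model often generates: the pre-1.0 module-level call,
# which was removed from the openai package in v1.0.
#   import openai
#   openai.ChatCompletion.create(model="gpt-4", messages=[...])

# Current 1.x style: instantiate a client and call chat.completions.create.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```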

LLMs are stupid - nothing magic, nothing great. They’re just tools. The problem with the recent LLM craze is that people make too many statements that are only partially true.

> tell it when it hallucinates, it’ll correct itself

No, it doesn't. Are you serious?

  • it did, just today, 3 times, and countless times before… you just gotta take some serious time to learn and understand it… or alternatively write snarky comments on the internet…

    • So when LLMs go around in circles, as they often do [1], that's a skill issue. But when they get it right some of the time, that's proof of superiority.

      This is the kind of reasoning that dominates LLM zealotry. No evidence given for extraordinary claims. Just a barrage of dismissals of legitimate problems. Including the article in discussion.

      All of this makes me have a hard time taking any of it seriously.

      [1]: https://news.ycombinator.com/item?id=44050152

    • Interesting. For me it just keeps making up new stuff that doesn't exist when I feed it the error and tell it that it's hallucinating.

      Perhaps people building CRUD web apps have a different experience than people building something niche?
