
Comment by Euphorbium

8 days ago

Smells exactly like an LLM-created solution.

Or just what happens when you hire a bunch of 20-year-olds and let them loose.

That's currently how I model my usage of LLMs in code. A smart veeeery junior engineer that needs to be kept on a veeeeery short leash.

  • Yes. LLMs are very much like a smart intern you hired with no real experience who is very eager to please you.

    • IMO, they're worse than that. You can teach an intern things, correct their mistakes, and help them become better, and your investment will lead to them performing better.

      LLMs are an eternal intern that can only repeat what it's gleaned from some articles it skimmed last year or whatever. If your expected response isn't in its corpus, or isn't in it frequently enough, and it can't just regurgitate an amalgamation of the top N articles you'd find on Google anyway, tough luck.


  • Even at 20 years old I would not have done this.

    • The difference is that today's digital natives regard computers as magic and most don't know what's really happening when their framework du jour spits out some "unreadable" text.

    • So much this. I was interning at a government entity at 20 and I already knew you needed credentials to do shit. Most frameworks have this by default for free; we're so incredibly screwed with these folks running rampant and destroying the government.

  • One who thinks "open source" means blindly copy/pasting code snippets found online.

  • It's definitely both. A bunch of 20-year-olds were let loose to be "super efficient." So, to be efficient, they used LLMs to implement what should be a major government oversight webpage. Even after the fix, the list is a few half-baked partial document excerpts with a few sentences saying, "look how great we are!" It's embarrassing.

Does it? At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Maybe they used Grok ;P

  • > At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

    Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely skip authorization business logic, error handling, and logging even when prompted to do so.
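    To make the "trivial SQLi" point concrete, here is a minimal sketch of the pattern in question (my own illustration in Python/sqlite3, not code from the site being discussed): string-concatenated SQL versus a parameterized query.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice')")

        name = "alice' OR '1'='1"  # hostile input a web form could send

        # Injection-prone: user input concatenated straight into the SQL text.
        unsafe = f"SELECT * FROM users WHERE name = '{name}'"
        print(conn.execute(unsafe).fetchall())        # returns every row -> injection worked

        # Parameterized query: the driver binds the value, so the quote is just data.
        safe = "SELECT * FROM users WHERE name = ?"
        print(conn.execute(safe, (name,)).fetchall()) # returns [] -> no match, no injection

    The first query is exactly the kind of code that gets waved through when nobody reviews what the model generated; the second is the one-line fix every driver supports.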

Does it, though? The saying goes that we shouldn't mistake incompetence for malice, but extending that benefit of the doubt takes more effort than usual with Musk's retinue.

Smells like getting a backdoor in early.