Comment by heavyset_go
8 days ago
IMO, they're worse than that. You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.
An LLM is an eternal intern that can only repeat what it's gleaned from some articles it skimmed last year or whatever. If your expected response isn't in its corpus, or isn't in it frequently enough, and it can't just regurgitate an amalgamation of the top N articles you'd find on Google anyway, tough luck.
The Age of the Eternal Intern
LLMs are to interns what house cats are to babies. They seem more self-sufficient at first, but soon the toddler grows up, and you're stuck with an animal that will forever need you to scoop its poops.
And the content online is now written by Fully Automated Eternal September
Today is Friday the 11490th of September 1993.
Without a mechanism to detect output from LLMs, we’re essentially facing an eternal model collapse with each new ingestion of information from academic journals, to blogs, to art. [1][2]
[1] https://en.m.wikipedia.org/wiki/Model_collapse
[2] https://thebullshitmachines.com/lesson-16-the-first-step-fal...
> You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.
You can't do it the same way you would with a human developer, but you can get a somewhat effective form of it through .cursorrules files and the like.
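For instance, a project-level rules file lets you persist corrections so the model stops repeating the same mistake across sessions. A minimal sketch (the specific rules below are made-up examples, not from any real project):

```
# .cursorrules — plain-text instructions the editor prepends to every request
- Use the project's logger; never leave print() calls in for debugging.
- All public functions need type hints and a one-line docstring.
- Prefer pathlib.Path over os.path string manipulation.
```

It's closer to pinning a note above the intern's desk than actually teaching them, but it does carry corrections forward.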