Comment by DonHopkins
3 hours ago
That's simply not true. That's just not the way LLMs work. LLMs are not magic.
LLMs are stateless: they don't "remember" your bespoke programming language's manual and examples between completion calls, so you have to include all of that material in each and every call. That balloons the number of tokens used, reduces how much useful work you can do with the remaining tokens and attention, and is a costly waste of tokens, electricity, and money.
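A rough back-of-the-envelope sketch of that token arithmetic (the numbers here are illustrative assumptions, not measurements: a hypothetical 5,000-token manual, an 8,000-token context window, and a crude one-token-per-word estimate):

```python
# Illustrative sketch: why a stateless completion API forces you to resend
# a bespoke-language manual on every call, eating the context budget.
# All figures below are assumed for illustration, not real measurements.

def tokens(text: str) -> int:
    # Crude token estimate: ~1 token per whitespace-separated word.
    return len(text.split())

MANUAL = "manual " * 5000   # stand-in for a ~5,000-token language manual
CONTEXT_WINDOW = 8000       # assumed per-call context budget

def budget_left(task_prompt: str, include_manual: bool) -> int:
    """Tokens remaining for actual work after the prompt is sent."""
    prompt = (MANUAL if include_manual else "") + task_prompt
    return CONTEXT_WINDOW - tokens(prompt)

task = "translate this function " * 50  # a ~150-token task prompt

print(budget_left(task, include_manual=True))   # 2850 tokens left for work
print(budget_left(task, include_manual=False))  # 7850 tokens left for work

# And the manual's cost recurs on every call, since nothing is remembered:
calls = 100
print(tokens(MANUAL) * calls)  # 500000 extra tokens billed across 100 calls
```

Under these assumed numbers, the bespoke manual consumes most of the window before any work happens, and its cost is paid again on every single call.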
That isn't anywhere near as effective or efficient as using the LLM's pre-existing training on billions of lines of well-known programming languages: manuals, tutorials, examples, code bases, Stack Overflow discussions, books, GitHub repos, PRs, etc.
What is your extraordinary evidence for your extraordinary claims? Have you empirically measured how well it works, or is it just vibes and handwaving?