Comment by emporas
10 months ago
It works very well (99.9%) when the problem resides in familiar territory of the user's knowledge. When I know enough about a problem, I know how to decompose it into smaller pieces, and all (most?) of the smaller pieces have already been solved countless times.
When a problem is far outside my understanding, A.I. leads me down a wrong path more often than not. Accuracy is terrible, because I don't know how to decompose the problem.
Jargon plays a crucial role there. LLMs need to be guided with as much of the problem's correct jargon as possible.
I have done this for decades with people. I read a book at some point saying that the surest way to get people to like you is to speak to them in the words they usually use themselves. No matter what concepts they are hearing, if the words belong to their familiar vocabulary, they are more than happy to discuss anything.
So when I meet someone, I always try to absorb as much of their vocabulary as possible, as quickly as possible, and then I use it to describe ideas I am interested in. People understand much better that way.
Anyway, the same holds true for LLMs: they need to hear the words of the problem, expressed in that particular jargon. So when a programmer wants to use a library, he needs to absorb the jargon used in that particular library. It is only then that accuracy rates hit many nines.