Comment by WA

7 months ago

> Get information from llms after learning how to prompt them so that they won't hallucinate.

That is structurally impossible, because LLMs have no mechanism for knowing which answer is right or wrong. Please provide information on what this prompting is supposed to look like.

> This is structurally impossible

False.

The mechanisms include examples/in-context learning (ICL), feedback and validation loops/tool use, backtracking/conversation trees/rejection sampling, editorial oversight/managerial correction, document-assisted reasoning, and having well-defined, well-documented high-level processes, workflows, and checklists.
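To make the validation-loop-plus-rejection-sampling part concrete, here's a minimal sketch in Python. `call_llm` and `validate` are placeholders for whatever model API and domain-specific check you actually use, not any particular library:

```python
# Sketch of a validation loop with rejection sampling.
# `call_llm(prompt) -> str` and `validate(answer) -> (ok, problems)` are
# placeholders for your own model call and domain-specific checks.

def ask_with_validation(call_llm, validate, prompt, max_attempts=5):
    """Sample answers until one passes validation, feeding failures back."""
    feedback = ""
    for attempt in range(max_attempts):
        answer = call_llm(prompt + feedback)
        ok, problems = validate(answer)   # e.g. run tests, check citations
        if ok:
            return answer                 # a human still reviews before use
        # Rejection + feedback: tell the model what was wrong and retry.
        feedback = (
            "\n\nYour previous answer had these problems:\n"
            + "\n".join(f"- {p}" for p in problems)
            + "\nPlease fix them and answer again."
        )
    raise RuntimeError("No answer passed validation; escalate to a human.")
```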

So basically the same things you need to do for managing other people while covering your own ass.

You are still very much in the loop, and you never, ever use output you haven't approved and fact-checked. But you also give it the references and examples it needs to improve its accuracy, and you give it feedback and iterate on problems until they're really solved.
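The "references and examples" part can be as simple as a grounded, few-shot prompt template like this (again just a sketch; the documents and worked examples are whatever you supply):

```python
# Sketch of a reference-grounded, few-shot prompt. You supply the documents
# and examples; the model is asked to cite them so the answer is easy to
# fact-check afterwards.

def build_grounded_prompt(question, reference_docs, worked_examples):
    refs = "\n\n".join(
        f"[doc {i}] {doc}" for i, doc in enumerate(reference_docs, 1)
    )
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in worked_examples)
    return (
        "Answer using ONLY the reference documents below. "
        "Cite the [doc N] you used for each claim, and say 'I don't know' "
        "if the documents don't cover it.\n\n"
        f"References:\n{refs}\n\n"
        f"Examples of the format I want:\n{shots}\n\n"
        f"Q: {question}\nA:"
    )
```

Asking for the [doc N] citations is what makes the fact-checking step fast.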

Modern LLMs like GPT-4, Claude 3 Opus, and Gemini 1.5 no longer have the cascading hallucination problem. If there is a hallucination or mistake, you can backtrack with a better prompt and eliminate it, or just correct it in context. Then, unlike with GPT-3.5, there's a good chance it'll run with the correction without immediately making further mistakes.
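Backtracking here just means rewinding the conversation to the turn that went wrong and branching with a better prompt, rather than stacking corrections on top of a bad answer. Roughly, with the same kind of placeholder `call_llm` (this time taking a message list):

```python
# Sketch of backtracking in a conversation tree: instead of correcting a bad
# answer downstream, rewind to the turn that went wrong and branch with an
# improved prompt. `call_llm(messages) -> str` is a placeholder.

def backtrack_and_retry(call_llm, messages, bad_turn_index, better_prompt):
    """Drop everything from the bad turn onward and branch with a new prompt."""
    branch = messages[:bad_turn_index]            # keep only the good prefix
    branch.append({"role": "user", "content": better_prompt})
    reply = call_llm(branch)
    branch.append({"role": "assistant", "content": reply})
    return branch                                 # original history is left untouched
```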

Work with it the way you would work with a junior subordinate who can do good work if you help them, but doesn't realize when they do bad work unless you help them a little more. Ensure that it doesn't matter if they make mistakes, because together you fix them, and they still help you work much faster than you could do it on your own.