Comment by posix_monad
7 months ago
LLMs and astro-turfing have ruined that approach. I honestly don't know where to get information from these days.
> LLMs and astro-turfing have ruined that approach. I honestly don't know where to get information from these days.
Get information from LLMs after learning how to prompt them so that they won't hallucinate. Get information from searches by using LLMs to filter through the crap results. Get information from scientific papers on Google Scholar and the arXiv. Get information from textbooks on Library Genesis. Get information from audiobooks on the Audiobook Bay. Get information from peers trained in specific domains. Get information by reading the code and documentation of open-source projects. Get information by performing experiments and trials. Get information by compiling reports and essays.
There are still many sources of information. And it's okay to work hard for it.
Good luck, and happy knowledge work.
> Get information from LLMs after learning how to prompt them so that they won't hallucinate.
That is structurally impossible, because LLMs have no mechanism for knowing whether an answer is right or wrong. Please explain what this prompting is supposed to look like.
> That is structurally impossible
False.
The mechanisms include examples/in-context learning (ICL), feedback and validation loops/tool use, backtracking/conversation trees/rejection sampling, editorial oversight/managerial correction, document-assisted reasoning, and having well-defined, well-documented high-level processes, workflows, and checklists.
So basically the same things you need to do for managing other people while covering your own ass.
You are still very much in the loop, and you never, ever use output you haven't approved and fact-checked. But you also give the model the references and examples it needs to improve its accuracy, and you give it feedback and iterate on problems until they're actually solved.
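To make the validation-loop part concrete, here's a minimal sketch in Python. The call_llm() wrapper, the message shape, and the JSON answer format are placeholders I'm assuming, not any particular vendor's API; the point is the reject-and-retry structure around the model:

```python
import json

def call_llm(messages: list[dict]) -> str:
    """Hypothetical wrapper around whatever chat model you use.

    The message shape ([{"role": ..., "content": ...}]) is an assumption;
    swap in your actual client here.
    """
    raise NotImplementedError("plug in your model client")

def validated_answer(question: str, max_attempts: int = 3) -> dict:
    """Ask for JSON, then reject and retry until it parses and cites sources."""
    messages = [
        {"role": "system", "content": 'Reply ONLY with a JSON object of the form '
                                      '{"answer": "...", "sources": ["..."]}.'},
        {"role": "user", "content": question},
    ]
    for _ in range(max_attempts):
        reply = call_llm(messages)
        try:
            data = json.loads(reply)
            if isinstance(data, dict) and data.get("sources"):
                return data  # passed the checks: valid JSON with at least one source
            feedback = "You cited no sources. Add at least one, or say you don't know."
        except json.JSONDecodeError:
            feedback = "That was not valid JSON. Reply with only the JSON object."
        # The feedback step: show the model its own output, tell it exactly
        # what failed, and let it try again.
        messages += [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": feedback},
        ]
    raise RuntimeError("no acceptable answer after retries; escalate to a human")
```

You stay in charge of the final fact check either way; the loop just automates the boring "is this even well-formed and sourced" part before the output ever reaches you.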
Modern LLMs like GPT-4, Claude 3 Opus, and Gemini 1.5 no longer have the cascading hallucination problem. If there is a hallucination/mistake, you can backtrack with a better prompt and eliminate it, or just correct it in context. Then, unlike with GPT-3.5, there's a good chance it'll run with the correction without immediately making further mistakes.
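The difference between those two recovery moves, sketched with the same hypothetical call_llm() wrapper as above:

```python
def call_llm(messages: list[dict]) -> str:
    """Hypothetical model wrapper, same assumption as in the earlier sketch."""
    raise NotImplementedError("plug in your model client")

def correct_in_context(messages: list[dict], bad_reply: str, correction: str) -> str:
    """Leave the mistake in the transcript and ask the model to revise it."""
    followup = messages + [
        {"role": "assistant", "content": bad_reply},
        {"role": "user", "content": f"That's wrong: {correction}. Please revise."},
    ]
    return call_llm(followup)

def backtrack(messages: list[dict], bad_turn_index: int, better_prompt: str) -> str:
    """Throw away the bad turn and everything after it, then retry with a
    sharper prompt, so the mistake never pollutes the context at all."""
    trimmed = messages[:bad_turn_index]
    trimmed.append({"role": "user", "content": better_prompt})
    return call_llm(trimmed)
```

Backtracking is usually the cleaner option when the mistake would otherwise keep getting referenced; in-context correction is fine for small slips.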
Work with it the way you would work with a junior subordinate who can do good work if you help them, but doesn't realize when they do bad work unless you help them a little more. Ensure that it doesn't matter if they make mistakes, because together you fix them, and they still help you work much faster than you could do it on your own.